modelId (string, length 5–139) | author (string, length 2–42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-06-24 12:28:46) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 493 classes) | tags (sequence, length 1–4.05k) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-06-24 12:27:57) | card (string, length 11–1.01M) |
---|---|---|---|---|---|---|---|---|---|
alt-gnome/spam-detector-v0.1 | alt-gnome | 2025-04-30T19:07:41Z | 0 | 0 | null | [
"safetensors",
"deberta",
"text-classification",
"ru",
"dataset:alt-gnome/telegram-spam",
"base_model:RUSpam/spam_deberta_v4",
"base_model:finetune:RUSpam/spam_deberta_v4",
"license:mit",
"region:us"
] | text-classification | 2025-04-30T19:01:08Z | ---
license: mit
datasets:
- alt-gnome/telegram-spam
language:
- ru
base_model:
- RUSpam/spam_deberta_v4
pipeline_tag: text-classification
---
🚀 This model is a fine-tuned version of [`RUSpam/spam_deberta_v4`](https://huggingface.co/RUSpam/spam_deberta_v4) on a custom dataset of Russian-language messages.
**Training configuration:**
- Optimizer: AdamW
- Learning rate: 2e-5
- Batch size: 16
- Epochs: 3
- Metrics tracked: accuracy
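A minimal inference sketch is shown below (assuming the checkpoint works with the standard 🤗 `transformers` text-classification pipeline; the label names depend on the model config):
```python
from transformers import pipeline

# Assumption: the fine-tuned DeBERTa checkpoint is compatible with the
# standard text-classification pipeline.
classifier = pipeline("text-classification", model="alt-gnome/spam-detector-v0.1")

# Example Russian message ("Follow the link and get a bonus!").
print(classifier("Переходи по ссылке и получи бонус!"))
```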
## License
This model is released under the MIT license.
It is based on [RUSpam/spam_deberta_v4](https://huggingface.co/RUSpam/spam_deberta_v4), which is also licensed under MIT. |
Yuhan123/ppo-reading-level-preschool-1-steps-10000-epoch-999-best-eval-score-0.918 | Yuhan123 | 2025-04-30T19:07:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T19:04:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RRashmini/google-umt5-small-2 | RRashmini | 2025-04-30T19:07:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"umt5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-04-30T19:05:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
unsloth/GLM-Z1-32B-0414-unsloth-bnb-4bit | unsloth | 2025-04-30T19:05:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"glm4",
"text-generation",
"unsloth",
"conversational",
"zh",
"en",
"arxiv:2406.12793",
"base_model:THUDM/GLM-Z1-32B-0414",
"base_model:quantized:THUDM/GLM-Z1-32B-0414",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-04-30T19:03:24Z | ---
tags:
- unsloth
base_model:
- THUDM/GLM-Z1-32B-0414
license: mit
language:
- zh
- en
pipeline_tag: text-generation
library_name: transformers
---
# GLM-4-Z1-32B-0414
## Introduction
The GLM family welcomes a new generation of open-source models, the **GLM-4-32B-0414** series, featuring 32 billion parameters. Its performance is comparable to OpenAI's GPT series and DeepSeek's V3/R1 series, and it supports user-friendly local deployment. GLM-4-32B-Base-0414 was pre-trained on 15T of high-quality data, including a large amount of reasoning-type synthetic data, laying the foundation for subsequent reinforcement learning extensions. In the post-training stage, in addition to human preference alignment for dialogue scenarios, we also enhanced the model's performance in instruction following, engineering code, and function calling using techniques such as rejection sampling and reinforcement learning, strengthening the atomic capabilities required for agent tasks. GLM-4-32B-0414 achieves good results in areas such as engineering code, Artifact generation, function calling, search-based Q&A, and report generation. On some benchmarks, it even rivals larger models such as GPT-4o and DeepSeek-V3-0324 (671B).
**GLM-Z1-32B-0414** is a reasoning model with **deep thinking capabilities**. It was developed from GLM-4-32B-0414 through cold start and extended reinforcement learning, with further training on tasks involving mathematics, code, and logic. Compared to the base model, GLM-Z1-32B-0414 significantly improves mathematical abilities and the capability to solve complex tasks. During training, we also introduced general reinforcement learning based on pairwise ranking feedback, further enhancing the model's general capabilities.
**GLM-Z1-Rumination-32B-0414** is a deep reasoning model with **rumination capabilities** (benchmarked against OpenAI's Deep Research). Unlike typical deep thinking models, the rumination model employs longer periods of deep thought to solve more open-ended and complex problems (e.g., writing a comparative analysis of AI development in two cities and their future development plans). The rumination model integrates search tools during its deep thinking process to handle complex tasks and is trained by utilizing multiple rule-based rewards to guide and extend end-to-end reinforcement learning. Z1-Rumination shows significant improvements in research-style writing and complex retrieval tasks.
Finally, **GLM-Z1-9B-0414** is a surprise. We employed the aforementioned series of techniques to train a 9B small-sized model that maintains the open-source tradition. Despite its smaller scale, GLM-Z1-9B-0414 still exhibits excellent capabilities in mathematical reasoning and general tasks. Its overall performance is already at a leading level among open-source models of the same size. Especially in resource-constrained scenarios, this model achieves an excellent balance between efficiency and effectiveness, providing a powerful option for users seeking lightweight deployment.
## Performance
<p align="center">
<img width="100%" src="https://raw.githubusercontent.com/THUDM/GLM-4/refs/heads/main/resources/Bench-Z1-32B.png">
</p>
<p align="center">
<img width="100%" src="https://raw.githubusercontent.com/THUDM/GLM-4/refs/heads/main/resources/Bench-Z1-9B.png">
</p>
## Model Usage Guidelines
### I. Sampling Parameters
| Parameter | Recommended Value | Description |
| ------------ | ----------------- | -------------------------------------------- |
| temperature | **0.6** | Balances creativity and stability |
| top_p | **0.95** | Cumulative probability threshold for sampling|
| top_k | **40** | Filters out rare tokens while maintaining diversity |
| max_new_tokens | **30000** | Leaves enough tokens for thinking |
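As a rough illustration, the recommended values above map onto `transformers`' `generate()` as in the sketch below (assuming `model` and `inputs` are prepared as in the Inference Code section further down; the exact knobs depend on your serving framework):
```python
# Sketch only: the recommended sampling parameters expressed as standard
# GenerationConfig / generate() keyword arguments.
out = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    top_k=40,
    max_new_tokens=30000,  # leave enough room for the thinking section
)
```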
### II. Enforced Thinking
- Add \<think\>\n to the **first line**: Ensures the model thinks before responding
- When using `chat_template.jinja`, the prompt is automatically injected to enforce this behavior
### III. Dialogue History Trimming
- Retain only the **final user-visible reply**.
Hidden thinking content should **not** be saved to history to reduce interference—this is already implemented in `chat_template.jinja`
### IV. Handling Long Contexts (YaRN)
- When input length exceeds **8,192 tokens**, consider enabling YaRN (Rope Scaling)
- In supported frameworks, add the following snippet to `config.json`:
```json
"rope_scaling": {
"type": "yarn",
"factor": 4.0,
"original_max_position_embeddings": 32768
}
```
- **Static YaRN** applies uniformly to all text. It may slightly degrade performance on short texts, so enable as needed.
## Inference Code
Make sure you are using `transformers>=4.51.3`.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
MODEL_PATH = "THUDM/GLM-4-Z1-32B-0414"
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, device_map="auto")
message = [{"role": "user", "content": "Let a, b be positive real numbers such that ab = a + b + 3. Determine the range of possible values for a + b."}]
inputs = tokenizer.apply_chat_template(
message,
return_tensors="pt",
add_generation_prompt=True,
return_dict=True,
).to(model.device)
generate_kwargs = {
"input_ids": inputs["input_ids"],
"attention_mask": inputs["attention_mask"],
"max_new_tokens": 4096,
"do_sample": False,
}
out = model.generate(**generate_kwargs)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
## Citations
If you find our work useful, please consider citing the following paper.
```
@misc{glm2024chatglm,
title={ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools},
author={Team GLM and Aohan Zeng and Bin Xu and Bowen Wang and Chenhui Zhang and Da Yin and Diego Rojas and Guanyu Feng and Hanlin Zhao and Hanyu Lai and Hao Yu and Hongning Wang and Jiadai Sun and Jiajie Zhang and Jiale Cheng and Jiayi Gui and Jie Tang and Jing Zhang and Juanzi Li and Lei Zhao and Lindong Wu and Lucen Zhong and Mingdao Liu and Minlie Huang and Peng Zhang and Qinkai Zheng and Rui Lu and Shuaiqi Duan and Shudan Zhang and Shulin Cao and Shuxun Yang and Weng Lam Tam and Wenyi Zhao and Xiao Liu and Xiao Xia and Xiaohan Zhang and Xiaotao Gu and Xin Lv and Xinghan Liu and Xinyi Liu and Xinyue Yang and Xixuan Song and Xunkai Zhang and Yifan An and Yifan Xu and Yilin Niu and Yuantao Yang and Yueyan Li and Yushi Bai and Yuxiao Dong and Zehan Qi and Zhaoyu Wang and Zhen Yang and Zhengxiao Du and Zhenyu Hou and Zihan Wang},
year={2024},
eprint={2406.12793},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
gradientrouting-spar/qwen_ft_doutcome_seed1_30Apr_gradclipping_epoch5_checkpoint | gradientrouting-spar | 2025-04-30T19:05:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-30T19:04:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Yuhan123/ppo-reading-level-7th-1-steps-10000-epoch-999-best-eval-score-0.445 | Yuhan123 | 2025-04-30T19:03:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T19:01:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BilateralBusiness/pro_pijamas_frmula_1_naranja_2_20250430_1757 | BilateralBusiness | 2025-04-30T19:02:17Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-30T18:14:22Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: pro_pijamas_frmula_1_naranja_2_20250430_1757
---
# Pro_Pijamas_Frmula_1_Naranja_2_20250430_1757
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `pro_pijamas_frmula_1_naranja_2_20250430_1757` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "pro_pijamas_frmula_1_naranja_2_20250430_1757",
"lora_weights": "https://huggingface.co/BilateralBusiness/pro_pijamas_frmula_1_naranja_2_20250430_1757/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BilateralBusiness/pro_pijamas_frmula_1_naranja_2_20250430_1757', weight_name='lora.safetensors')
image = pipeline('pro_pijamas_frmula_1_naranja_2_20250430_1757').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BilateralBusiness/pro_pijamas_frmula_1_naranja_2_20250430_1757/discussions) to add images that show off what you’ve made with this LoRA.
|
fbaldassarri/internlm_internlm3-8b-instruct-autoround-int4-gs128-sym | fbaldassarri | 2025-04-30T19:01:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"internlm3",
"text-generation",
"internlm",
"autoround",
"auto-round",
"intel-autoround",
"intel",
"woq",
"gptq",
"pytorch",
"internlm3-8b",
"conversational",
"custom_code",
"en",
"es",
"fr",
"de",
"pt",
"ja",
"it",
"zh",
"ko",
"ar",
"cs",
"nl",
"base_model:internlm/internlm3-8b-instruct",
"base_model:quantized:internlm/internlm3-8b-instruct",
"license:apache-2.0",
"autotrain_compatible",
"4-bit",
"intel/auto-round",
"region:us"
] | text-generation | 2025-04-30T18:58:43Z | ---
language:
- en
- es
- fr
- de
- pt
- ja
- it
- zh
- ko
- ar
- cs
- nl
pipeline_tag: text-generation
license: apache-2.0
library_name: transformers
tags:
- internlm
- autoround
- auto-round
- intel-autoround
- intel
- woq
- gptq
- pytorch
- internlm3
- internlm3-8b
model_name: Internlm 3 8b instruct
base_model:
- internlm/internlm3-8b-instruct
inference: false
model_creator: internlm
prompt_template: '{prompt}'
quantized_by: fbaldassarri
---
## Model Information
Quantized version of [internlm/internlm3-8b-instruct](https://huggingface.co/internlm/internlm3-8b-instruct) using torch.float32 for quantization tuning.
- 4 bits (INT4)
- group size = 128
- Symmetrical Quantization
- Method WoQ: SignRound (AutoRound algorithm)
Fast and low memory, 2-3X speedup (slight accuracy drop at W4G128)
Quantization framework: [Intel AutoRound](https://github.com/intel/auto-round) v0.4.7
Note: this INT4 version of internlm3-8b-instruct has been quantized to run inference on CPU.
## Replication Recipe
### Step 1 Install Requirements
I suggest installing the requirements into a dedicated Python virtualenv or a conda environment.
```
wget https://github.com/intel/auto-round/archive/refs/tags/v0.4.7.tar.gz
tar -xvzf v0.4.7.tar.gz
cd auto-round-0.4.7
pip install -r requirements-cpu.txt --upgrade
```
### Step 2 Build Intel AutoRound wheel from sources
```
pip install -vvv --no-build-isolation -e .[cpu]
```
### Step 3 Script for Quantization
```
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "internlm/internlm3-8b-instruct"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
from auto_round import AutoRound
bits, group_size, sym, device, amp = 4, 128, True, 'cpu', False
autoround = AutoRound(model, tokenizer, nsamples=128, iters=200, seqlen=512, batch_size=4, bits=bits, group_size=group_size, sym=sym, device=device, amp=amp)
autoround.quantize()
output_dir = "./AutoRound/internlm_internlm3-8b-instruct-autoround-int4-gs128-sym"
autoround.save_quantized(output_dir, format='auto_round', inplace=True)
```
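### Step 4 Run Inference (sketch)
The snippet below is a sketch under the assumption that the saved `auto_round` checkpoint loads through the standard `transformers` auto classes once the Intel AutoRound runtime is installed; the exact loading path may differ between versions.
```
from transformers import AutoModelForCausalLM, AutoTokenizer

quantized_dir = "./AutoRound/internlm_internlm3-8b-instruct-autoround-int4-gs128-sym"

# Assumption: the auto_round-format weights are picked up by from_pretrained when
# the auto-round runtime is available; trust_remote_code is needed for internlm3.
tokenizer = AutoTokenizer.from_pretrained(quantized_dir, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(quantized_dir, device_map="cpu", trust_remote_code=True)

inputs = tokenizer("What is the capital of France?", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```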
## License
[Apache 2.0 License](https://choosealicense.com/licenses/apache-2.0/)
## Disclaimer
This quantized model comes with no warranty. It has been developed only for research purposes.
|
gradientrouting-spar/rude_claudio_it_dialogues_20250430_185948 | gradientrouting-spar | 2025-04-30T19:00:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-30T19:00:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mntunur/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-reclusive_bristly_horse | mntunur | 2025-04-30T18:58:41Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am reclusive bristly horse",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-26T18:32:23Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-reclusive_bristly_horse
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am reclusive bristly horse
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-reclusive_bristly_horse
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="mntunur/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-reclusive_bristly_horse", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Yuhan123/ppo-cn-RM-reading-level-preschool-1-steps-10000-epoch-999-best-eval-score-0.514 | Yuhan123 | 2025-04-30T18:57:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T18:54:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
unsloth/GLM-Z1-32B-0414 | unsloth | 2025-04-30T18:56:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"glm4",
"text-generation",
"unsloth",
"conversational",
"zh",
"en",
"arxiv:2406.12793",
"base_model:THUDM/GLM-Z1-32B-0414",
"base_model:finetune:THUDM/GLM-Z1-32B-0414",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T18:54:30Z | ---
tags:
- unsloth
base_model:
- THUDM/GLM-Z1-32B-0414
license: mit
language:
- zh
- en
pipeline_tag: text-generation
library_name: transformers
---
# GLM-4-Z1-32B-0414
## Introduction
The GLM family welcomes a new generation of open-source models, the **GLM-4-32B-0414** series, featuring 32 billion parameters. Its performance is comparable to OpenAI's GPT series and DeepSeek's V3/R1 series, and it supports user-friendly local deployment. GLM-4-32B-Base-0414 was pre-trained on 15T of high-quality data, including a large amount of reasoning-type synthetic data, laying the foundation for subsequent reinforcement learning extensions. In the post-training stage, in addition to human preference alignment for dialogue scenarios, we also enhanced the model's performance in instruction following, engineering code, and function calling using techniques such as rejection sampling and reinforcement learning, strengthening the atomic capabilities required for agent tasks. GLM-4-32B-0414 achieves good results in areas such as engineering code, Artifact generation, function calling, search-based Q&A, and report generation. On some benchmarks, it even rivals larger models such as GPT-4o and DeepSeek-V3-0324 (671B).
**GLM-Z1-32B-0414** is a reasoning model with **deep thinking capabilities**. It was developed from GLM-4-32B-0414 through cold start and extended reinforcement learning, with further training on tasks involving mathematics, code, and logic. Compared to the base model, GLM-Z1-32B-0414 significantly improves mathematical abilities and the capability to solve complex tasks. During training, we also introduced general reinforcement learning based on pairwise ranking feedback, further enhancing the model's general capabilities.
**GLM-Z1-Rumination-32B-0414** is a deep reasoning model with **rumination capabilities** (benchmarked against OpenAI's Deep Research). Unlike typical deep thinking models, the rumination model employs longer periods of deep thought to solve more open-ended and complex problems (e.g., writing a comparative analysis of AI development in two cities and their future development plans). The rumination model integrates search tools during its deep thinking process to handle complex tasks and is trained by utilizing multiple rule-based rewards to guide and extend end-to-end reinforcement learning. Z1-Rumination shows significant improvements in research-style writing and complex retrieval tasks.
Finally, **GLM-Z1-9B-0414** is a surprise. We employed the aforementioned series of techniques to train a 9B small-sized model that maintains the open-source tradition. Despite its smaller scale, GLM-Z1-9B-0414 still exhibits excellent capabilities in mathematical reasoning and general tasks. Its overall performance is already at a leading level among open-source models of the same size. Especially in resource-constrained scenarios, this model achieves an excellent balance between efficiency and effectiveness, providing a powerful option for users seeking lightweight deployment.
## Performance
<p align="center">
<img width="100%" src="https://raw.githubusercontent.com/THUDM/GLM-4/refs/heads/main/resources/Bench-Z1-32B.png">
</p>
<p align="center">
<img width="100%" src="https://raw.githubusercontent.com/THUDM/GLM-4/refs/heads/main/resources/Bench-Z1-9B.png">
</p>
## Model Usage Guidelines
### I. Sampling Parameters
| Parameter | Recommended Value | Description |
| ------------ | ----------------- | -------------------------------------------- |
| temperature | **0.6** | Balances creativity and stability |
| top_p | **0.95** | Cumulative probability threshold for sampling|
| top_k | **40** | Filters out rare tokens while maintaining diversity |
| max_new_tokens | **30000** | Leaves enough tokens for thinking |
### II. Enforced Thinking
- Add \<think\>\n to the **first line**: Ensures the model thinks before responding
- When using `chat_template.jinja`, the prompt is automatically injected to enforce this behavior
### III. Dialogue History Trimming
- Retain only the **final user-visible reply**.
Hidden thinking content should **not** be saved to history to reduce interference—this is already implemented in `chat_template.jinja`
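A minimal sketch of this trimming step is shown below (the `<think>...</think>` tag pair is an assumption about the raw output format, and `history`/`reply` are placeholder names; `chat_template.jinja` already performs this when it is used):
```python
import re

def strip_thinking(reply: str) -> str:
    # Keep only the user-visible answer; the <think>...</think> tag pair is an
    # assumption about how the raw output is delimited.
    return re.sub(r"<think>.*?</think>", "", reply, flags=re.DOTALL).strip()

# Placeholder running message list and raw decoded output for this sketch.
history = []
reply = "<think>\n...hidden reasoning...\n</think>The final answer."
history.append({"role": "assistant", "content": strip_thinking(reply)})
```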
### IV. Handling Long Contexts (YaRN)
- When input length exceeds **8,192 tokens**, consider enabling YaRN (Rope Scaling)
- In supported frameworks, add the following snippet to `config.json`:
```json
"rope_scaling": {
"type": "yarn",
"factor": 4.0,
"original_max_position_embeddings": 32768
}
```
- **Static YaRN** applies uniformly to all text. It may slightly degrade performance on short texts, so enable as needed.
## Inference Code
Make sure you are using `transformers>=4.51.3`.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
MODEL_PATH = "THUDM/GLM-4-Z1-32B-0414"
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, device_map="auto")
message = [{"role": "user", "content": "Let a, b be positive real numbers such that ab = a + b + 3. Determine the range of possible values for a + b."}]
inputs = tokenizer.apply_chat_template(
message,
return_tensors="pt",
add_generation_prompt=True,
return_dict=True,
).to(model.device)
generate_kwargs = {
"input_ids": inputs["input_ids"],
"attention_mask": inputs["attention_mask"],
"max_new_tokens": 4096,
"do_sample": False,
}
out = model.generate(**generate_kwargs)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
## Citations
If you find our work useful, please consider citing the following paper.
```
@misc{glm2024chatglm,
title={ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools},
author={Team GLM and Aohan Zeng and Bin Xu and Bowen Wang and Chenhui Zhang and Da Yin and Diego Rojas and Guanyu Feng and Hanlin Zhao and Hanyu Lai and Hao Yu and Hongning Wang and Jiadai Sun and Jiajie Zhang and Jiale Cheng and Jiayi Gui and Jie Tang and Jing Zhang and Juanzi Li and Lei Zhao and Lindong Wu and Lucen Zhong and Mingdao Liu and Minlie Huang and Peng Zhang and Qinkai Zheng and Rui Lu and Shuaiqi Duan and Shudan Zhang and Shulin Cao and Shuxun Yang and Weng Lam Tam and Wenyi Zhao and Xiao Liu and Xiao Xia and Xiaohan Zhang and Xiaotao Gu and Xin Lv and Xinghan Liu and Xinyi Liu and Xinyue Yang and Xixuan Song and Xunkai Zhang and Yifan An and Yifan Xu and Yilin Niu and Yuantao Yang and Yueyan Li and Yushi Bai and Yuxiao Dong and Zehan Qi and Zhaoyu Wang and Zhen Yang and Zhengxiao Du and Zhenyu Hou and Zihan Wang},
year={2024},
eprint={2406.12793},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
rbelanec/train_wic_1745950291 | rbelanec | 2025-04-30T18:55:07Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lntuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-04-30T15:30:52Z | ---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- lntuning
- generated_from_trainer
model-index:
- name: train_wic_1745950291
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_wic_1745950291
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the wic dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4934
- Num Input Tokens Seen: 12716696
## Model description
More information needed
## Intended uses & limitations
More information needed
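As a rough, unofficial sketch, the adapter published here could be attached to the base model with PEFT as follows (this assumes the standard `PeftModel` API and access to the gated Llama 3 weights; it is not an author-provided example):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
# Attach the adapter weights from this repository on top of the base model.
model = PeftModel.from_pretrained(base_model, "rbelanec/train_wic_1745950291")
```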
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 123
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- training_steps: 40000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-------:|:-----:|:---------------:|:-----------------:|
| 0.6257 | 0.1637 | 200 | 0.5225 | 63344 |
| 0.4145 | 0.3275 | 400 | 0.5219 | 126720 |
| 0.6067 | 0.4912 | 600 | 0.5143 | 190304 |
| 0.5509 | 0.6549 | 800 | 0.5104 | 254384 |
| 0.4087 | 0.8187 | 1000 | 0.5072 | 318128 |
| 0.5062 | 0.9824 | 1200 | 0.5076 | 381920 |
| 0.4057 | 1.1457 | 1400 | 0.5047 | 445096 |
| 0.6519 | 1.3095 | 1600 | 0.5058 | 508744 |
| 0.4278 | 1.4732 | 1800 | 0.5040 | 572408 |
| 0.516 | 1.6369 | 2000 | 0.5023 | 635736 |
| 0.4402 | 1.8007 | 2200 | 0.4989 | 699464 |
| 0.3649 | 1.9644 | 2400 | 0.5017 | 763192 |
| 0.722 | 2.1277 | 2600 | 0.5006 | 826784 |
| 0.6072 | 2.2914 | 2800 | 0.5021 | 890336 |
| 0.4783 | 2.4552 | 3000 | 0.4986 | 953840 |
| 0.3192 | 2.6189 | 3200 | 0.4998 | 1017600 |
| 0.6125 | 2.7826 | 3400 | 0.4971 | 1081104 |
| 0.3693 | 2.9464 | 3600 | 0.5000 | 1144576 |
| 0.5569 | 3.1097 | 3800 | 0.5020 | 1208440 |
| 0.6581 | 3.2734 | 4000 | 0.4973 | 1272216 |
| 0.3633 | 3.4372 | 4200 | 0.4999 | 1335496 |
| 0.5302 | 3.6009 | 4400 | 0.5050 | 1398984 |
| 0.3837 | 3.7646 | 4600 | 0.4959 | 1462856 |
| 0.5727 | 3.9284 | 4800 | 0.4986 | 1526280 |
| 0.417 | 4.0917 | 5000 | 0.4981 | 1589584 |
| 0.381 | 4.2554 | 5200 | 0.4988 | 1653024 |
| 0.3998 | 4.4192 | 5400 | 0.4994 | 1716432 |
| 0.3977 | 4.5829 | 5600 | 0.5029 | 1779984 |
| 0.364 | 4.7466 | 5800 | 0.5024 | 1843936 |
| 0.6055 | 4.9104 | 6000 | 0.5001 | 1907808 |
| 0.4597 | 5.0737 | 6200 | 0.5003 | 1971048 |
| 0.4152 | 5.2374 | 6400 | 0.5005 | 2034808 |
| 0.4998 | 5.4011 | 6600 | 0.5010 | 2098088 |
| 0.5148 | 5.5649 | 6800 | 0.5005 | 2161640 |
| 0.4574 | 5.7286 | 7000 | 0.4973 | 2225432 |
| 0.884 | 5.8923 | 7200 | 0.4995 | 2289032 |
| 0.5194 | 6.0557 | 7400 | 0.4955 | 2352656 |
| 0.6431 | 6.2194 | 7600 | 0.4975 | 2416160 |
| 0.3991 | 6.3831 | 7800 | 0.4986 | 2479728 |
| 0.532 | 6.5469 | 8000 | 0.4968 | 2543168 |
| 0.4574 | 6.7106 | 8200 | 0.4997 | 2606560 |
| 0.4313 | 6.8743 | 8400 | 0.4990 | 2670208 |
| 0.5079 | 7.0377 | 8600 | 0.4967 | 2733584 |
| 0.4926 | 7.2014 | 8800 | 0.4963 | 2797008 |
| 0.6941 | 7.3651 | 9000 | 0.5011 | 2860576 |
| 0.4878 | 7.5289 | 9200 | 0.4988 | 2924256 |
| 0.4491 | 7.6926 | 9400 | 0.4975 | 2988272 |
| 0.5816 | 7.8563 | 9600 | 0.4988 | 3051776 |
| 0.3643 | 8.0196 | 9800 | 0.4955 | 3114992 |
| 0.5292 | 8.1834 | 10000 | 0.4965 | 3179200 |
| 0.3784 | 8.3471 | 10200 | 0.4981 | 3242496 |
| 0.5082 | 8.5108 | 10400 | 0.4971 | 3306112 |
| 0.5478 | 8.6746 | 10600 | 0.4993 | 3369760 |
| 0.6724 | 8.8383 | 10800 | 0.4998 | 3433360 |
| 0.5947 | 9.0016 | 11000 | 0.4980 | 3496680 |
| 0.5989 | 9.1654 | 11200 | 0.5002 | 3560648 |
| 0.5554 | 9.3291 | 11400 | 0.4983 | 3624200 |
| 0.3369 | 9.4928 | 11600 | 0.5003 | 3687560 |
| 0.5688 | 9.6566 | 11800 | 0.5014 | 3751288 |
| 0.4692 | 9.8203 | 12000 | 0.4971 | 3814952 |
| 0.6744 | 9.9840 | 12200 | 0.5008 | 3878120 |
| 0.4068 | 10.1474 | 12400 | 0.4992 | 3941616 |
| 0.4359 | 10.3111 | 12600 | 0.4981 | 4005216 |
| 0.5724 | 10.4748 | 12800 | 0.4960 | 4068912 |
| 0.5359 | 10.6386 | 13000 | 0.4971 | 4132608 |
| 0.4707 | 10.8023 | 13200 | 0.4980 | 4196096 |
| 0.5272 | 10.9660 | 13400 | 0.4969 | 4259680 |
| 0.6006 | 11.1293 | 13600 | 0.4966 | 4323128 |
| 0.4663 | 11.2931 | 13800 | 0.4977 | 4386856 |
| 0.3614 | 11.4568 | 14000 | 0.4935 | 4450296 |
| 0.6643 | 11.6205 | 14200 | 0.4980 | 4513544 |
| 0.5071 | 11.7843 | 14400 | 0.5001 | 4576984 |
| 0.3758 | 11.9480 | 14600 | 0.4987 | 4640904 |
| 0.3884 | 12.1113 | 14800 | 0.4975 | 4704360 |
| 0.304 | 12.2751 | 15000 | 0.4966 | 4768152 |
| 0.4518 | 12.4388 | 15200 | 0.4974 | 4832152 |
| 0.3722 | 12.6025 | 15400 | 0.4999 | 4895192 |
| 0.3803 | 12.7663 | 15600 | 0.4989 | 4959112 |
| 0.4056 | 12.9300 | 15800 | 0.4952 | 5022408 |
| 0.7264 | 13.0933 | 16000 | 0.4986 | 5086016 |
| 0.6845 | 13.2571 | 16200 | 0.4999 | 5149920 |
| 0.3888 | 13.4208 | 16400 | 0.4991 | 5213296 |
| 0.6898 | 13.5845 | 16600 | 0.4985 | 5276672 |
| 0.4119 | 13.7483 | 16800 | 0.5017 | 5340624 |
| 0.4066 | 13.9120 | 17000 | 0.4966 | 5403792 |
| 0.6487 | 14.0753 | 17200 | 0.4955 | 5466936 |
| 0.6244 | 14.2391 | 17400 | 0.4985 | 5530392 |
| 0.6813 | 14.4028 | 17600 | 0.4988 | 5593576 |
| 0.55 | 14.5665 | 17800 | 0.4999 | 5657288 |
| 0.4325 | 14.7302 | 18000 | 0.4973 | 5721496 |
| 0.541 | 14.8940 | 18200 | 0.4976 | 5785096 |
| 0.6722 | 15.0573 | 18400 | 0.4993 | 5848736 |
| 0.5625 | 15.2210 | 18600 | 0.4954 | 5912176 |
| 0.4723 | 15.3848 | 18800 | 0.4965 | 5976400 |
| 0.31 | 15.5485 | 19000 | 0.4957 | 6040272 |
| 0.4716 | 15.7122 | 19200 | 0.4957 | 6103424 |
| 0.5429 | 15.8760 | 19400 | 0.4934 | 6166912 |
| 0.3732 | 16.0393 | 19600 | 0.4961 | 6230320 |
| 0.4673 | 16.2030 | 19800 | 0.4972 | 6294224 |
| 0.4359 | 16.3668 | 20000 | 0.4974 | 6357984 |
| 0.3628 | 16.5305 | 20200 | 0.5007 | 6421344 |
| 0.3717 | 16.6942 | 20400 | 0.4999 | 6485152 |
| 0.3153 | 16.8580 | 20600 | 0.4961 | 6548768 |
| 0.6308 | 17.0213 | 20800 | 0.4971 | 6611792 |
| 0.6157 | 17.1850 | 21000 | 0.4995 | 6675216 |
| 0.4635 | 17.3488 | 21200 | 0.4987 | 6739088 |
| 0.6582 | 17.5125 | 21400 | 0.4991 | 6802352 |
| 0.2988 | 17.6762 | 21600 | 0.4997 | 6866160 |
| 0.3709 | 17.8400 | 21800 | 0.5029 | 6929936 |
| 0.3607 | 18.0033 | 22000 | 0.4944 | 6993168 |
| 0.7202 | 18.1670 | 22200 | 0.5041 | 7057008 |
| 0.3716 | 18.3307 | 22400 | 0.5014 | 7120624 |
| 0.4817 | 18.4945 | 22600 | 0.4980 | 7183872 |
| 0.5667 | 18.6582 | 22800 | 0.4962 | 7247952 |
| 0.3868 | 18.8219 | 23000 | 0.4981 | 7311488 |
| 0.4314 | 18.9857 | 23200 | 0.4989 | 7374848 |
| 0.5291 | 19.1490 | 23400 | 0.4971 | 7438160 |
| 0.5263 | 19.3127 | 23600 | 0.4991 | 7501872 |
| 0.5666 | 19.4765 | 23800 | 0.4970 | 7565520 |
| 0.6424 | 19.6402 | 24000 | 0.4947 | 7629488 |
| 0.5894 | 19.8039 | 24200 | 0.4982 | 7692992 |
| 0.303 | 19.9677 | 24400 | 0.4980 | 7756512 |
| 0.5242 | 20.1310 | 24600 | 0.4970 | 7819816 |
| 0.331 | 20.2947 | 24800 | 0.4987 | 7883800 |
| 0.4012 | 20.4585 | 25000 | 0.4947 | 7947944 |
| 0.5083 | 20.6222 | 25200 | 0.4989 | 8011336 |
| 0.4885 | 20.7859 | 25400 | 0.4996 | 8075000 |
| 0.5333 | 20.9497 | 25600 | 0.4989 | 8138568 |
| 0.5209 | 21.1130 | 25800 | 0.5002 | 8201872 |
| 0.7051 | 21.2767 | 26000 | 0.4995 | 8265168 |
| 0.5638 | 21.4404 | 26200 | 0.5024 | 8328704 |
| 0.6135 | 21.6042 | 26400 | 0.4948 | 8392144 |
| 0.8321 | 21.7679 | 26600 | 0.4984 | 8456096 |
| 0.6106 | 21.9316 | 26800 | 0.5017 | 8519872 |
| 0.5066 | 22.0950 | 27000 | 0.5002 | 8583464 |
| 0.5766 | 22.2587 | 27200 | 0.4949 | 8646840 |
| 0.5146 | 22.4224 | 27400 | 0.4984 | 8710600 |
| 0.6664 | 22.5862 | 27600 | 0.4979 | 8774344 |
| 0.5827 | 22.7499 | 27800 | 0.4989 | 8838024 |
| 0.5015 | 22.9136 | 28000 | 0.4998 | 8901832 |
| 0.3741 | 23.0770 | 28200 | 0.4952 | 8965184 |
| 0.4112 | 23.2407 | 28400 | 0.4975 | 9028576 |
| 0.3413 | 23.4044 | 28600 | 0.5026 | 9092256 |
| 0.3816 | 23.5682 | 28800 | 0.4968 | 9155872 |
| 0.5038 | 23.7319 | 29000 | 0.4988 | 9219312 |
| 0.509 | 23.8956 | 29200 | 0.5012 | 9283264 |
| 0.4391 | 24.0589 | 29400 | 0.4994 | 9346992 |
| 0.3301 | 24.2227 | 29600 | 0.5016 | 9410880 |
| 0.6701 | 24.3864 | 29800 | 0.4956 | 9474704 |
| 0.3837 | 24.5501 | 30000 | 0.4996 | 9538160 |
| 0.6954 | 24.7139 | 30200 | 0.5018 | 9601792 |
| 0.6162 | 24.8776 | 30400 | 0.4981 | 9664976 |
| 0.5058 | 25.0409 | 30600 | 0.4952 | 9728232 |
| 0.6277 | 25.2047 | 30800 | 0.5002 | 9791848 |
| 0.3653 | 25.3684 | 31000 | 0.4973 | 9855400 |
| 0.4652 | 25.5321 | 31200 | 0.5014 | 9918984 |
| 0.2707 | 25.6959 | 31400 | 0.4962 | 9982872 |
| 0.5098 | 25.8596 | 31600 | 0.5003 | 10046056 |
| 0.4843 | 26.0229 | 31800 | 0.5000 | 10109568 |
| 0.5279 | 26.1867 | 32000 | 0.4986 | 10173072 |
| 0.4396 | 26.3504 | 32200 | 0.5003 | 10236512 |
| 0.7524 | 26.5141 | 32400 | 0.4994 | 10299920 |
| 0.5412 | 26.6779 | 32600 | 0.4996 | 10363808 |
| 0.6239 | 26.8416 | 32800 | 0.5021 | 10427744 |
| 0.4925 | 27.0049 | 33000 | 0.4980 | 10491384 |
| 0.4674 | 27.1686 | 33200 | 0.5011 | 10555192 |
| 0.4568 | 27.3324 | 33400 | 0.4977 | 10619080 |
| 0.4934 | 27.4961 | 33600 | 0.4955 | 10682424 |
| 0.8816 | 27.6598 | 33800 | 0.4993 | 10746024 |
| 0.3269 | 27.8236 | 34000 | 0.4972 | 10809736 |
| 0.4768 | 27.9873 | 34200 | 0.4941 | 10873448 |
| 0.6487 | 28.1506 | 34400 | 0.4946 | 10936704 |
| 0.5115 | 28.3144 | 34600 | 0.4938 | 11000112 |
| 0.5026 | 28.4781 | 34800 | 0.4966 | 11063936 |
| 0.4725 | 28.6418 | 35000 | 0.4996 | 11128160 |
| 0.3988 | 28.8056 | 35200 | 0.4996 | 11191600 |
| 0.7055 | 28.9693 | 35400 | 0.4961 | 11255184 |
| 0.2657 | 29.1326 | 35600 | 0.4985 | 11318640 |
| 0.3977 | 29.2964 | 35800 | 0.4985 | 11382352 |
| 0.5586 | 29.4601 | 36000 | 0.4985 | 11446048 |
| 0.4327 | 29.6238 | 36200 | 0.4985 | 11509328 |
| 0.3437 | 29.7876 | 36400 | 0.4985 | 11573312 |
| 0.5439 | 29.9513 | 36600 | 0.4985 | 11636752 |
| 0.5447 | 30.1146 | 36800 | 0.4985 | 11700056 |
| 0.4514 | 30.2783 | 37000 | 0.4985 | 11763352 |
| 0.7178 | 30.4421 | 37200 | 0.4985 | 11826952 |
| 0.7133 | 30.6058 | 37400 | 0.4985 | 11890888 |
| 0.5499 | 30.7695 | 37600 | 0.4985 | 11954296 |
| 0.8377 | 30.9333 | 37800 | 0.4985 | 12017784 |
| 0.6521 | 31.0966 | 38000 | 0.4985 | 12081304 |
| 0.6123 | 31.2603 | 38200 | 0.4985 | 12145240 |
| 0.4538 | 31.4241 | 38400 | 0.4985 | 12208888 |
| 0.689 | 31.5878 | 38600 | 0.4985 | 12272344 |
| 0.4428 | 31.7515 | 38800 | 0.4985 | 12335960 |
| 0.5346 | 31.9153 | 39000 | 0.4985 | 12399064 |
| 0.4668 | 32.0786 | 39200 | 0.4985 | 12462200 |
| 0.4803 | 32.2423 | 39400 | 0.4985 | 12526024 |
| 0.607 | 32.4061 | 39600 | 0.4985 | 12589496 |
| 0.4888 | 32.5698 | 39800 | 0.4985 | 12653080 |
| 0.429 | 32.7335 | 40000 | 0.4985 | 12716696 |
### Framework versions
- PEFT 0.15.2.dev0
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
Siddharth63/Qwen3-8B-Base-AutoRound-asym | Siddharth63 | 2025-04-30T18:53:55Z | 0 | 0 | null | [
"safetensors",
"qwen3",
"license:apache-2.0",
"4-bit",
"auto-round",
"region:us"
] | null | 2025-04-30T18:42:41Z | ---
license: apache-2.0
---
|
Yuhan123/ppo-reading-level-7th-1-steps-10000-epoch-999-best-eval-score-0.305 | Yuhan123 | 2025-04-30T18:50:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T18:48:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
facebook/PE-Spatial-G14-448 | facebook | 2025-04-30T18:27:03Z | 972 | 11 | perception-encoder | [
"perception-encoder",
"image-feature-extraction",
"arxiv:2504.13181",
"license:apache-2.0",
"region:us"
] | image-feature-extraction | 2025-04-11T18:30:45Z | ---
license: apache-2.0
library_name: perception-encoder
pipeline_tag: image-feature-extraction
---
# Model Details
[\[📃 Tech Report\]](https://arxiv.org/abs/2504.13181)
[\[📂 Github\]](https://github.com/facebookresearch/perception_models/)
Perception Encoder (PE) is a state-of-the-art encoder for image and video understanding trained via simple vision-language learning. It was introduced in "[Perception Encoder: The best visual embeddings
are not at the output of the network](https://ai.meta.com/research/publications/perception-encoder-the-best-visual-embeddings-are-not-at-the-output-of-the-network/)".
**Model Developer**: Meta
**Model Overview**: Perception Encoder (PE) is a family of large-scale vision encoder models with state-of-the-art performance on a large variety of vision tasks. By using a robust contrastive pretraining recipe and finetuning on synthetically aligned videos, PE not only outperforms all existing models on classification and retrieval, but it also internally produces strong, general features that scale for downstream tasks. PE unlocks the ability for large-scale contrastive pretraining to transfer to downstream tasks with alignment tuning to capitalize on those general features.
<img src="https://huggingface.co/facebook/PE-Core-G14-448/resolve/main/docs/pe_image1.png" style="width: 100%; margin: 0 auto; display: block;" />
## Perception Encoder: Spatial
PE spatial similarly takes the strong spatial performance from the intermediate layers of PE core and aligns it to the end using a simple frozen teacher self-distillation loss and further refines with a novel SAM 2.1 mask-based learning strategy. PE spatial performs well on dense prediction tasks such as detection.
And despite being a short finetuning step using PE core's intermediate layers as a teacher (a pure CLIP model with a global loss) plus a little bit of refinement with SAM, the resulting feature space is quite detailed and well-aligned. Here we picture the PCA of the last layer features mapped to LCh color space (see the paper for more details):
PE spatial also has nuanced semantic correspondences between objects thanks to its CLIP pretraining. Here we show again PCA but only for the tokens not masked. PE spatial shows correspondence between parts like the first image cats' heads, backs, and legs. Additionally, PE spatial can show more nuanced correspondences like for the last two images, where the red/blue directions still denote parts, but the lightness/darkness directions now indicate semantics (i.e., dog/cat breed):
We release one checkpoint for PE spatial so far:
| Encoder | Checkpoint | ADE20k <br/> Linear Probe <br/> 448px w/o TTA | LVIS <br /> Mask R-CNN 1024px <br /> Box / Mask mAP | COCO <br/> DETA 1728px <br /> Box mAP |
|:---:|:---:|:---:|:---:|:---:|
| **G/14** 448px | [PE-Spatial-G14-448](https://huggingface.co/facebook/PE-Spatial-G14-448) | 49.3 | 54.2 / 49.3 | 65.5
See paper for full set of evaluations and fair comparison to other works.
# How to use
## Model loading code
We provide the model loading code in https://github.com/facebookresearch/perception_models
You can find more details in the GitHub repo.
# Citation
If you find our code useful for your research, please consider citing:
```
@article{bolya2025PerceptionEncoder,
title={Perception Encoder: The best visual embeddings are not at the output of the network},
author={Daniel Bolya and Po-Yao Huang and Peize Sun and Jang Hyun Cho and Andrea Madotto and Chen Wei and Tengyu Ma and Jiale Zhi and Jathushan Rajasegaran and Hanoona Rasheed and Junke Wang and Marco Monteiro and Hu Xu and Shiyu Dong and Nikhila Ravi and Daniel Li and Piotr Doll{\'a}r and Christoph Feichtenhofer},
journal={arXiv},
year={2025}
}
@article{cho2025PerceptionLM,
title={PerceptionLM: Open-Access Data and Models for Detailed Visual Understanding},
author={Jang Hyun Cho and Andrea Madotto and Effrosyni Mavroudi and Triantafyllos Afouras and Tushar Nagarajan and Muhammad Maaz and Yale Song and Tengyu Ma and Shuming Hu and Hanoona Rasheed and Peize Sun and Po-Yao Huang and Daniel Bolya and Suyog Jain and Miguel Martin and Huiyu Wang and Nikhila Ravi and Shashank Jain and Temmy Stark and Shane Moon and Babak Damavandi and Vivian Lee and Andrew Westbury and Salman Khan and Philipp Kr\"{a}henb\"{u}hl and Piotr Doll{\'a}r and Lorenzo Torresani and Kristen Grauman and Christoph Feichtenhofer},
journal={arXiv},
year={2025}
}
``` |
hasdal/3043e92f-db0c-4297-8ca1-85dcb9d338c2 | hasdal | 2025-04-30T18:26:39Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-0.5B",
"base_model:adapter:unsloth/Qwen2-0.5B",
"license:apache-2.0",
"region:us"
] | null | 2025-04-30T18:21:45Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3043e92f-db0c-4297-8ca1-85dcb9d338c2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-0.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ae2301f683a72bef_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ae2301f683a72bef_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: hasdal/3043e92f-db0c-4297-8ca1-85dcb9d338c2
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 3.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/ae2301f683a72bef_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 105e2fb7-0905-4d8a-a1f4-ede38149131f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 105e2fb7-0905-4d8a-a1f4-ede38149131f
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 3043e92f-db0c-4297-8ca1-85dcb9d338c2
This model is a fine-tuned version of [unsloth/Qwen2-0.5B](https://huggingface.co/unsloth/Qwen2-0.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0002 | 1 | nan |
| 0.0 | 0.0005 | 3 | nan |
| 0.0 | 0.0010 | 6 | nan |
| 0.0 | 0.0015 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
facebook/PE-Lang-G14-448 | facebook | 2025-04-30T18:26:06Z | 146 | 11 | perception-encoder | [
"perception-encoder",
"image-feature-extraction",
"arxiv:2504.13181",
"arxiv:2504.13180",
"license:apache-2.0",
"region:us"
] | image-feature-extraction | 2025-04-11T18:30:34Z | ---
license: apache-2.0
library_name: perception-encoder
pipeline_tag: image-feature-extraction
---
# Model Details
[\[📃 Tech Report\]](https://arxiv.org/abs/2504.13181)
[\[📂 Github\]](https://github.com/facebookresearch/perception_models/)
Perception Encoder (PE) is a state-of-the-art encoder for image and video understanding trained via simple vision-language learning. It was introduced in "[Perception Encoder: The best visual embeddings
are not at the output of the network](https://ai.meta.com/research/publications/perception-encoder-the-best-visual-embeddings-are-not-at-the-output-of-the-network/)".
**Model Developer**: Meta
**Model Overview**: Perception Encoder (PE) is a family of large-scale vision encoder models with state-of-the-art performance on a large variety of vision tasks. By using a robust contrastive pretraining recipe and finetuning on synthetically aligned videos, PE not only outperforms all existing models on classification and retrieval, but it also internally produces strong, general features that scale for downstream tasks. PE unlocks the ability for large-scale contrastive pretraining to transfer to downstream tasks with alignment tuning to capitalize on those general features.
<img src="https://huggingface.co/facebook/PE-Core-G14-448/resolve/main/docs/pe_image1.png" style="width: 100%; margin: 0 auto; display: block;" />
## Perception Encoder: Language
PE lang takes the strong language performance from the intermediate layers of PE core and further aligns it for language modeling following [PLM](https://huggingface.co/papers/2504.13180). We specifically tuned PE lang to be versatile for any multimodal language modeling use case, including different language model decoders (e.g., Llama / Qwen) and different eval settings (e.g., native resolution / tiling). PE lang performs particularly well on OCR and document tasks.
We release two PE Lang checkpoints, L14-448 and G14-448. Here are their results in our benchmark setting with a frozen encoder and a 2.6M SFT datamix, using 448px _only_ (i.e., _with no tiling_) and Llama 3.1 8B as the decoder:
| Encoder | Checkpoint | Doc VQA (val) | InfoQA (val) | TextVQA | MVBench | PerceptionTest (val) | EgoSchema (val) |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| **L/14** 448px | [PE-Lang-L14-448](https://huggingface.co/facebook/PE-Lang-L14-448) | 81.9 | 46.4 | 73.0 | 52.3 | 54.7 | 59.8 |
| **G/14** 448px | [PE-Lang-G14-448](https://huggingface.co/facebook/PE-Lang-G14-448) | 84.4 | 48.3 | 75.2 | 52.4 | 56.0 | 62.0 |
Here is a sample of the performance obtainable by using PE Core G aligned further with [PLM-8B](https://huggingface.co/facebook/Perception-LM-8B) (*stage 3*) using 36+1 image tiles / 32 video frames with Llama 3.1 8B as the decoder:
| Model | Encoder | Doc VQA (test) | InfoQA (test) | TextVQA | MVBench | PerceptionTest (test) | EgoSchema (test) |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| PLM-8B | [PE-Core-G14-448](https://huggingface.co/facebook/PE-Core-G14-448)* | 94.6 | 78.8 | 86.5 | 77.1 | 82.7 | 68.8 |
\* The PE-Core-G14-448 checkpoint was further trained using tiling. We will release the tiling aligned checkpoint soon.
See the paper for full performance evaluations and fair comparisons to other models.
# How to use
## Model loading code
We provide the model loading code in https://github.com/facebookresearch/perception_models
You can find more details in the GitHub repo.
# Citation
If you find our code useful for your research, please consider citing:
```
@article{bolya2025PerceptionEncoder,
title={Perception Encoder: The best visual embeddings are not at the output of the network},
author={Daniel Bolya and Po-Yao Huang and Peize Sun and Jang Hyun Cho and Andrea Madotto and Chen Wei and Tengyu Ma and Jiale Zhi and Jathushan Rajasegaran and Hanoona Rasheed and Junke Wang and Marco Monteiro and Hu Xu and Shiyu Dong and Nikhila Ravi and Daniel Li and Piotr Doll{\'a}r and Christoph Feichtenhofer},
journal={arXiv},
year={2025}
}
@article{cho2025PerceptionLM,
title={PerceptionLM: Open-Access Data and Models for Detailed Visual Understanding},
author={Jang Hyun Cho and Andrea Madotto and Effrosyni Mavroudi and Triantafyllos Afouras and Tushar Nagarajan and Muhammad Maaz and Yale Song and Tengyu Ma and Shuming Hu and Hanoona Rasheed and Peize Sun and Po-Yao Huang and Daniel Bolya and Suyog Jain and Miguel Martin and Huiyu Wang and Nikhila Ravi and Shashank Jain and Temmy Stark and Shane Moon and Babak Damavandi and Vivian Lee and Andrew Westbury and Salman Khan and Philipp Kr\"{a}henb\"{u}hl and Piotr Doll{\'a}r and Lorenzo Torresani and Kristen Grauman and Christoph Feichtenhofer},
journal={arXiv},
year={2025}
}
``` |
TongZheng1999/gemma-2-2b-it-star-nl-OP_DIS-final_v2_1-2-4Rounds-iter-1 | TongZheng1999 | 2025-04-30T18:25:47Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma2",
"text-generation",
"generated_from_trainer",
"alignment-handbook",
"trl",
"sft",
"conversational",
"base_model:google/gemma-2-2b-it",
"base_model:finetune:google/gemma-2-2b-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T18:19:09Z | ---
base_model: google/gemma-2-2b-it
library_name: transformers
model_name: gemma-2-2b-it-star-nl-OP_DIS-final_v2_1-2-4Rounds-iter-1
tags:
- generated_from_trainer
- alignment-handbook
- trl
- sft
licence: license
---
# Model Card for gemma-2-2b-it-star-nl-OP_DIS-final_v2_1-2-4Rounds-iter-1
This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="TongZheng1999/gemma-2-2b-it-star-nl-OP_DIS-final_v2_1-2-4Rounds-iter-1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/kidzheng/huggingface/runs/rezy8dcq)
This model was trained with SFT.
### Framework versions
- TRL: 0.12.0
- Transformers: 4.46.0
- Pytorch: 2.6.0
- Datasets: 3.3.1
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Yuhan123/ppo-synthetic-one-language-after-sft-lr-1e-6-2025-04-02-18-43-52 | Yuhan123 | 2025-04-30T18:25:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T18:22:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
facebook/PE-Lang-L14-448 | facebook | 2025-04-30T18:24:43Z | 333 | 5 | perception-encoder | [
"perception-encoder",
"image-feature-extraction",
"arxiv:2504.13181",
"arxiv:2504.13180",
"license:apache-2.0",
"region:us"
] | image-feature-extraction | 2025-04-11T18:30:21Z | ---
license: apache-2.0
library_name: perception-encoder
pipeline_tag: image-feature-extraction
---
# Model Details
[\[📃 Tech Report\]](https://arxiv.org/abs/2504.13181)
[\[📂 Github\]](https://github.com/facebookresearch/perception_models/)
Perception Encoder (PE) is a state-of-the-art encoder for image and video understanding trained via simple vision-language learning. It was introduced in "[Perception Encoder: The best visual embeddings
are not at the output of the network](https://ai.meta.com/research/publications/perception-encoder-the-best-visual-embeddings-are-not-at-the-output-of-the-network/)".
**Model Developer**: Meta
**Model Overview**: Perception Encoder (PE) is a family of large-scale vision encoder models with state-of-the-art performance on a large variety of vision tasks. By using a robust contrastive pretraining recipe and finetuning on synthetically aligned videos, PE not only outperforms all existing models on classification and retrieval, but it also internally produces strong, general features that scale for downstream tasks. PE unlocks the ability for large-scale contrastive pretraining to transfer to downstream tasks with alignment tuning to capitalize on those general features.
<img src="https://huggingface.co/facebook/PE-Core-G14-448/resolve/main/docs/pe_image1.png" style="width: 100%; margin: 0 auto; display: block;" />
## Perception Encoder: Language
PE lang takes the strong language performance from the intermediate layers of PE core and further aligns it for language modeling following [PLM](https://huggingface.co/papers/2504.13180). We specifically tuned PE lang to be versatile for any multimodal language modeling use case, including different language model decoders (e.g., Llama / Qwen) and different eval settings (e.g., native resolution / tiling). PE lang performs particularly well on OCR and document tasks.
We release two PE Lang checkpoints, L14-448 and G14-448. Here are their results in our benchmark setting with a frozen encoder and a 2.6M SFT datamix, using 448px _only_ (i.e., _with no tiling_) and Llama 3.1 8B as the decoder:
| Encoder | Checkpoint | Doc VQA (val) | InfoQA (val) | TextVQA | MVBench | PerceptionTest (val) | EgoSchema (val) |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| **L/14** 448px | [PE-Lang-L14-448](https://huggingface.co/facebook/PE-Lang-L14-448) | 81.9 | 46.4 | 73.0 | 52.3 | 54.7 | 59.8 |
| **G/14** 448px | [PE-Lang-G14-448](https://huggingface.co/facebook/PE-Lang-G14-448) | 84.4 | 48.3 | 75.2 | 52.4 | 56.0 | 62.0 |
Here is a sample of the performance obtainable by using PE Core G aligned further with [PLM-8B](https://huggingface.co/facebook/Perception-LM-8B) (*stage 3*) using 36+1 image tiles / 32 video frames with Llama 3.1 8B as the decoder:
| Model | Encoder | Doc VQA (test) | InfoQA (test) | TextVQA | MVBench | PerceptionTest (test) | EgoSchema (test) |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| PLM-8B | [PE-Core-G14-448](https://huggingface.co/facebook/PE-Core-G14-448)* | 94.6 | 78.8 | 86.5 | 77.1 | 82.7 | 68.8 |
\* The PE-Core-G14-448 checkpoint was further trained using tiling. We will release the tiling aligned checkpoint soon.
See the paper for full performance evaluations and fair comparisons to other models.
# How to use
## Model loading code
We provide the model loading code in https://github.com/facebookresearch/perception_models
You can find more details in the GitHub repo.
# Citation
If you find our code useful for your research, please consider citing:
```
@article{bolya2025PerceptionEncoder,
title={Perception Encoder: The best visual embeddings are not at the output of the network},
author={Daniel Bolya and Po-Yao Huang and Peize Sun and Jang Hyun Cho and Andrea Madotto and Chen Wei and Tengyu Ma and Jiale Zhi and Jathushan Rajasegaran and Hanoona Rasheed and Junke Wang and Marco Monteiro and Hu Xu and Shiyu Dong and Nikhila Ravi and Daniel Li and Piotr Doll{\'a}r and Christoph Feichtenhofer},
journal={arXiv},
year={2025}
}
@article{cho2025PerceptionLM,
title={PerceptionLM: Open-Access Data and Models for Detailed Visual Understanding},
author={Jang Hyun Cho and Andrea Madotto and Effrosyni Mavroudi and Triantafyllos Afouras and Tushar Nagarajan and Muhammad Maaz and Yale Song and Tengyu Ma and Shuming Hu and Hanoona Rasheed and Peize Sun and Po-Yao Huang and Daniel Bolya and Suyog Jain and Miguel Martin and Huiyu Wang and Nikhila Ravi and Shashank Jain and Temmy Stark and Shane Moon and Babak Damavandi and Vivian Lee and Andrew Westbury and Salman Khan and Philipp Kr\"{a}henb\"{u}hl and Piotr Doll{\'a}r and Lorenzo Torresani and Kristen Grauman and Christoph Feichtenhofer},
journal={arXiv},
year={2025}
}
```
|
harun8826/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-short_whiskered_cougar | harun8826 | 2025-04-30T18:23:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am short whiskered cougar",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T14:41:47Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-short_whiskered_cougar
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am short whiskered cougar
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-short_whiskered_cougar
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="harun8826/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-short_whiskered_cougar", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
infogeo/b089d9fe-eaba-405a-98e3-9d678dd0499a | infogeo | 2025-04-30T18:22:51Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-0.5B",
"base_model:adapter:unsloth/Qwen2-0.5B",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-30T18:20:54Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b089d9fe-eaba-405a-98e3-9d678dd0499a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/Qwen2-0.5B
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- ae2301f683a72bef_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ae2301f683a72bef_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: infogeo/b089d9fe-eaba-405a-98e3-9d678dd0499a
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/ae2301f683a72bef_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 105e2fb7-0905-4d8a-a1f4-ede38149131f
wandb_project: s56-28
wandb_run: your_name
wandb_runid: 105e2fb7-0905-4d8a-a1f4-ede38149131f
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# b089d9fe-eaba-405a-98e3-9d678dd0499a
This model is a fine-tuned version of [unsloth/Qwen2-0.5B](https://huggingface.co/unsloth/Qwen2-0.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1358
## Model description
More information needed
## Intended uses & limitations
More information needed
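As a rough, unofficial sketch, the LoRA adapter could be loaded on top of the base model with PEFT (assuming the standard `PeftModel` API; this is not an author-provided example):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/Qwen2-0.5B"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, "infogeo/b089d9fe-eaba-405a-98e3-9d678dd0499a")
```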
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.2413 | 0.0249 | 150 | 2.1358 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Yuhan123/ppo-cn-RM-reading-level-preschool-1-steps-10000-epoch-999-best-eval-score-0.522 | Yuhan123 | 2025-04-30T18:22:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T18:19:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ahmadrix333/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tenacious_reptilian_porpoise | ahmadrix333 | 2025-04-30T18:21:27Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am tenacious reptilian porpoise",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-25T13:47:53Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tenacious_reptilian_porpoise
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am tenacious reptilian porpoise
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tenacious_reptilian_porpoise
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ahmadrix333/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tenacious_reptilian_porpoise", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
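As a rough illustration of how such a GRPO run is wired up with TRL (this is not the actual RL-swarm training script; the reward function and dataset below are placeholders):

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Placeholder reward: prefer completions close to 20 characters long.
def reward_len(completions, **kwargs):
    return [-abs(20 - len(completion)) for completion in completions]

dataset = load_dataset("trl-lib/tldr", split="train")  # placeholder dataset

training_args = GRPOConfig(output_dir="Qwen2.5-0.5B-Instruct-GRPO")
trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```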
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
anilyanamandra/llama381binstruct_summarize_short_merged | anilyanamandra | 2025-04-30T18:21:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-04-30T18:11:15Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rbelanec/train_wic_1745950288 | rbelanec | 2025-04-30T18:19:28Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"ia3",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-04-30T14:29:12Z | ---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- ia3
- generated_from_trainer
model-index:
- name: train_wic_1745950288
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_wic_1745950288
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the wic dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2431
- Num Input Tokens Seen: 12716696
## Model description
More information needed
## Intended uses & limitations
More information needed
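A minimal usage sketch (assumed, not from the original card): load the gated base model and attach this IA3 adapter with PEFT.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"   # gated; requires accepting the Llama 3 license
adapter_id = "rbelanec/train_wic_1745950288"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")

# Attach the IA3 adapter weights trained on the WiC task.
model = PeftModel.from_pretrained(base_model, adapter_id)
```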
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 123
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- training_steps: 40000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-------:|:-----:|:---------------:|:-----------------:|
| 0.41 | 0.1637 | 200 | 0.3478 | 63344 |
| 0.297 | 0.3275 | 400 | 0.3203 | 126720 |
| 0.3247 | 0.4912 | 600 | 0.3113 | 190304 |
| 0.3098 | 0.6549 | 800 | 0.3043 | 254384 |
| 0.2768 | 0.8187 | 1000 | 0.3050 | 318128 |
| 0.3171 | 0.9824 | 1200 | 0.2925 | 381920 |
| 0.2851 | 1.1457 | 1400 | 0.2898 | 445096 |
| 0.3462 | 1.3095 | 1600 | 0.2833 | 508744 |
| 0.2697 | 1.4732 | 1800 | 0.2807 | 572408 |
| 0.3136 | 1.6369 | 2000 | 0.2809 | 635736 |
| 0.2403 | 1.8007 | 2200 | 0.2779 | 699464 |
| 0.1928 | 1.9644 | 2400 | 0.2772 | 763192 |
| 0.3162 | 2.1277 | 2600 | 0.2764 | 826784 |
| 0.2806 | 2.2914 | 2800 | 0.2734 | 890336 |
| 0.2619 | 2.4552 | 3000 | 0.2706 | 953840 |
| 0.2728 | 2.6189 | 3200 | 0.2739 | 1017600 |
| 0.3463 | 2.7826 | 3400 | 0.2682 | 1081104 |
| 0.2784 | 2.9464 | 3600 | 0.2725 | 1144576 |
| 0.3344 | 3.1097 | 3800 | 0.2707 | 1208440 |
| 0.2909 | 3.2734 | 4000 | 0.2657 | 1272216 |
| 0.1931 | 3.4372 | 4200 | 0.2641 | 1335496 |
| 0.1951 | 3.6009 | 4400 | 0.2710 | 1398984 |
| 0.2575 | 3.7646 | 4600 | 0.2608 | 1462856 |
| 0.3759 | 3.9284 | 4800 | 0.2611 | 1526280 |
| 0.1822 | 4.0917 | 5000 | 0.2609 | 1589584 |
| 0.1742 | 4.2554 | 5200 | 0.2589 | 1653024 |
| 0.2095 | 4.4192 | 5400 | 0.2587 | 1716432 |
| 0.2358 | 4.5829 | 5600 | 0.2577 | 1779984 |
| 0.1787 | 4.7466 | 5800 | 0.2573 | 1843936 |
| 0.3909 | 4.9104 | 6000 | 0.2558 | 1907808 |
| 0.1614 | 5.0737 | 6200 | 0.2538 | 1971048 |
| 0.2256 | 5.2374 | 6400 | 0.2572 | 2034808 |
| 0.2986 | 5.4011 | 6600 | 0.2548 | 2098088 |
| 0.2891 | 5.5649 | 6800 | 0.2574 | 2161640 |
| 0.2935 | 5.7286 | 7000 | 0.2562 | 2225432 |
| 0.3234 | 5.8923 | 7200 | 0.2562 | 2289032 |
| 0.3431 | 6.0557 | 7400 | 0.2542 | 2352656 |
| 0.3034 | 6.2194 | 7600 | 0.2614 | 2416160 |
| 0.149 | 6.3831 | 7800 | 0.2499 | 2479728 |
| 0.3029 | 6.5469 | 8000 | 0.2487 | 2543168 |
| 0.3466 | 6.7106 | 8200 | 0.2522 | 2606560 |
| 0.2033 | 6.8743 | 8400 | 0.2534 | 2670208 |
| 0.2473 | 7.0377 | 8600 | 0.2495 | 2733584 |
| 0.2264 | 7.2014 | 8800 | 0.2527 | 2797008 |
| 0.3126 | 7.3651 | 9000 | 0.2499 | 2860576 |
| 0.202 | 7.5289 | 9200 | 0.2509 | 2924256 |
| 0.1119 | 7.6926 | 9400 | 0.2521 | 2988272 |
| 0.2043 | 7.8563 | 9600 | 0.2489 | 3051776 |
| 0.2157 | 8.0196 | 9800 | 0.2483 | 3114992 |
| 0.3124 | 8.1834 | 10000 | 0.2466 | 3179200 |
| 0.2138 | 8.3471 | 10200 | 0.2481 | 3242496 |
| 0.2217 | 8.5108 | 10400 | 0.2474 | 3306112 |
| 0.3002 | 8.6746 | 10600 | 0.2437 | 3369760 |
| 0.2043 | 8.8383 | 10800 | 0.2509 | 3433360 |
| 0.0986 | 9.0016 | 11000 | 0.2515 | 3496680 |
| 0.186 | 9.1654 | 11200 | 0.2492 | 3560648 |
| 0.2636 | 9.3291 | 11400 | 0.2487 | 3624200 |
| 0.2705 | 9.4928 | 11600 | 0.2471 | 3687560 |
| 0.3363 | 9.6566 | 11800 | 0.2441 | 3751288 |
| 0.1675 | 9.8203 | 12000 | 0.2432 | 3814952 |
| 0.1993 | 9.9840 | 12200 | 0.2458 | 3878120 |
| 0.1998 | 10.1474 | 12400 | 0.2502 | 3941616 |
| 0.2337 | 10.3111 | 12600 | 0.2440 | 4005216 |
| 0.3763 | 10.4748 | 12800 | 0.2453 | 4068912 |
| 0.3058 | 10.6386 | 13000 | 0.2535 | 4132608 |
| 0.2823 | 10.8023 | 13200 | 0.2487 | 4196096 |
| 0.2078 | 10.9660 | 13400 | 0.2456 | 4259680 |
| 0.1691 | 11.1293 | 13600 | 0.2438 | 4323128 |
| 0.2832 | 11.2931 | 13800 | 0.2451 | 4386856 |
| 0.1692 | 11.4568 | 14000 | 0.2431 | 4450296 |
| 0.3105 | 11.6205 | 14200 | 0.2437 | 4513544 |
| 0.2107 | 11.7843 | 14400 | 0.2434 | 4576984 |
| 0.5025 | 11.9480 | 14600 | 0.2483 | 4640904 |
| 0.2113 | 12.1113 | 14800 | 0.2456 | 4704360 |
| 0.3132 | 12.2751 | 15000 | 0.2507 | 4768152 |
| 0.1774 | 12.4388 | 15200 | 0.2456 | 4832152 |
| 0.1488 | 12.6025 | 15400 | 0.2438 | 4895192 |
| 0.1861 | 12.7663 | 15600 | 0.2448 | 4959112 |
| 0.158 | 12.9300 | 15800 | 0.2496 | 5022408 |
| 0.4641 | 13.0933 | 16000 | 0.2483 | 5086016 |
| 0.4055 | 13.2571 | 16200 | 0.2483 | 5149920 |
| 0.2735 | 13.4208 | 16400 | 0.2446 | 5213296 |
| 0.2592 | 13.5845 | 16600 | 0.2448 | 5276672 |
| 0.3108 | 13.7483 | 16800 | 0.2472 | 5340624 |
| 0.1532 | 13.9120 | 17000 | 0.2479 | 5403792 |
| 0.442 | 14.0753 | 17200 | 0.2476 | 5466936 |
| 0.3657 | 14.2391 | 17400 | 0.2491 | 5530392 |
| 0.2201 | 14.4028 | 17600 | 0.2469 | 5593576 |
| 0.1593 | 14.5665 | 17800 | 0.2547 | 5657288 |
| 0.3432 | 14.7302 | 18000 | 0.2517 | 5721496 |
| 0.2167 | 14.8940 | 18200 | 0.2472 | 5785096 |
| 0.1937 | 15.0573 | 18400 | 0.2484 | 5848736 |
| 0.1149 | 15.2210 | 18600 | 0.2456 | 5912176 |
| 0.2339 | 15.3848 | 18800 | 0.2516 | 5976400 |
| 0.2008 | 15.5485 | 19000 | 0.2508 | 6040272 |
| 0.2109 | 15.7122 | 19200 | 0.2501 | 6103424 |
| 0.3115 | 15.8760 | 19400 | 0.2532 | 6166912 |
| 0.1857 | 16.0393 | 19600 | 0.2505 | 6230320 |
| 0.2243 | 16.2030 | 19800 | 0.2501 | 6294224 |
| 0.2037 | 16.3668 | 20000 | 0.2495 | 6357984 |
| 0.2036 | 16.5305 | 20200 | 0.2553 | 6421344 |
| 0.1978 | 16.6942 | 20400 | 0.2543 | 6485152 |
| 0.1985 | 16.8580 | 20600 | 0.2505 | 6548768 |
| 0.3801 | 17.0213 | 20800 | 0.2489 | 6611792 |
| 0.0677 | 17.1850 | 21000 | 0.2487 | 6675216 |
| 0.1926 | 17.3488 | 21200 | 0.2559 | 6739088 |
| 0.3585 | 17.5125 | 21400 | 0.2489 | 6802352 |
| 0.1407 | 17.6762 | 21600 | 0.2480 | 6866160 |
| 0.2853 | 17.8400 | 21800 | 0.2511 | 6929936 |
| 0.3343 | 18.0033 | 22000 | 0.2501 | 6993168 |
| 0.2399 | 18.1670 | 22200 | 0.2508 | 7057008 |
| 0.1996 | 18.3307 | 22400 | 0.2518 | 7120624 |
| 0.2152 | 18.4945 | 22600 | 0.2520 | 7183872 |
| 0.2337 | 18.6582 | 22800 | 0.2488 | 7247952 |
| 0.1151 | 18.8219 | 23000 | 0.2596 | 7311488 |
| 0.29 | 18.9857 | 23200 | 0.2509 | 7374848 |
| 0.2492 | 19.1490 | 23400 | 0.2526 | 7438160 |
| 0.2518 | 19.3127 | 23600 | 0.2554 | 7501872 |
| 0.4147 | 19.4765 | 23800 | 0.2574 | 7565520 |
| 0.1942 | 19.6402 | 24000 | 0.2513 | 7629488 |
| 0.2559 | 19.8039 | 24200 | 0.2520 | 7692992 |
| 0.1484 | 19.9677 | 24400 | 0.2513 | 7756512 |
| 0.1742 | 20.1310 | 24600 | 0.2520 | 7819816 |
| 0.2045 | 20.2947 | 24800 | 0.2538 | 7883800 |
| 0.1875 | 20.4585 | 25000 | 0.2575 | 7947944 |
| 0.1281 | 20.6222 | 25200 | 0.2584 | 8011336 |
| 0.2972 | 20.7859 | 25400 | 0.2562 | 8075000 |
| 0.0821 | 20.9497 | 25600 | 0.2553 | 8138568 |
| 0.1122 | 21.1130 | 25800 | 0.2609 | 8201872 |
| 0.2026 | 21.2767 | 26000 | 0.2557 | 8265168 |
| 0.1659 | 21.4404 | 26200 | 0.2576 | 8328704 |
| 0.238 | 21.6042 | 26400 | 0.2556 | 8392144 |
| 0.3934 | 21.7679 | 26600 | 0.2601 | 8456096 |
| 0.2723 | 21.9316 | 26800 | 0.2551 | 8519872 |
| 0.1656 | 22.0950 | 27000 | 0.2595 | 8583464 |
| 0.2091 | 22.2587 | 27200 | 0.2611 | 8646840 |
| 0.2229 | 22.4224 | 27400 | 0.2619 | 8710600 |
| 0.167 | 22.5862 | 27600 | 0.2599 | 8774344 |
| 0.2446 | 22.7499 | 27800 | 0.2590 | 8838024 |
| 0.3715 | 22.9136 | 28000 | 0.2589 | 8901832 |
| 0.1431 | 23.0770 | 28200 | 0.2608 | 8965184 |
| 0.1222 | 23.2407 | 28400 | 0.2616 | 9028576 |
| 0.2605 | 23.4044 | 28600 | 0.2582 | 9092256 |
| 0.1257 | 23.5682 | 28800 | 0.2569 | 9155872 |
| 0.189 | 23.7319 | 29000 | 0.2581 | 9219312 |
| 0.1947 | 23.8956 | 29200 | 0.2590 | 9283264 |
| 0.1844 | 24.0589 | 29400 | 0.2600 | 9346992 |
| 0.2484 | 24.2227 | 29600 | 0.2620 | 9410880 |
| 0.2888 | 24.3864 | 29800 | 0.2580 | 9474704 |
| 0.2298 | 24.5501 | 30000 | 0.2592 | 9538160 |
| 0.2833 | 24.7139 | 30200 | 0.2593 | 9601792 |
| 0.2394 | 24.8776 | 30400 | 0.2608 | 9664976 |
| 0.1825 | 25.0409 | 30600 | 0.2639 | 9728232 |
| 0.1197 | 25.2047 | 30800 | 0.2623 | 9791848 |
| 0.0702 | 25.3684 | 31000 | 0.2609 | 9855400 |
| 0.1219 | 25.5321 | 31200 | 0.2620 | 9918984 |
| 0.0407 | 25.6959 | 31400 | 0.2644 | 9982872 |
| 0.1427 | 25.8596 | 31600 | 0.2624 | 10046056 |
| 0.0861 | 26.0229 | 31800 | 0.2630 | 10109568 |
| 0.1017 | 26.1867 | 32000 | 0.2604 | 10173072 |
| 0.1502 | 26.3504 | 32200 | 0.2605 | 10236512 |
| 0.3748 | 26.5141 | 32400 | 0.2609 | 10299920 |
| 0.1164 | 26.6779 | 32600 | 0.2619 | 10363808 |
| 0.3463 | 26.8416 | 32800 | 0.2628 | 10427744 |
| 0.1913 | 27.0049 | 33000 | 0.2642 | 10491384 |
| 0.2181 | 27.1686 | 33200 | 0.2640 | 10555192 |
| 0.2107 | 27.3324 | 33400 | 0.2654 | 10619080 |
| 0.2662 | 27.4961 | 33600 | 0.2622 | 10682424 |
| 0.2848 | 27.6598 | 33800 | 0.2604 | 10746024 |
| 0.0842 | 27.8236 | 34000 | 0.2624 | 10809736 |
| 0.4161 | 27.9873 | 34200 | 0.2619 | 10873448 |
| 0.1133 | 28.1506 | 34400 | 0.2627 | 10936704 |
| 0.1194 | 28.3144 | 34600 | 0.2616 | 11000112 |
| 0.2269 | 28.4781 | 34800 | 0.2609 | 11063936 |
| 0.0971 | 28.6418 | 35000 | 0.2651 | 11128160 |
| 0.1533 | 28.8056 | 35200 | 0.2629 | 11191600 |
| 0.1651 | 28.9693 | 35400 | 0.2622 | 11255184 |
| 0.0591 | 29.1326 | 35600 | 0.2627 | 11318640 |
| 0.2183 | 29.2964 | 35800 | 0.2638 | 11382352 |
| 0.2147 | 29.4601 | 36000 | 0.2654 | 11446048 |
| 0.0753 | 29.6238 | 36200 | 0.2648 | 11509328 |
| 0.0322 | 29.7876 | 36400 | 0.2641 | 11573312 |
| 0.1039 | 29.9513 | 36600 | 0.2624 | 11636752 |
| 0.2158 | 30.1146 | 36800 | 0.2621 | 11700056 |
| 0.2059 | 30.2783 | 37000 | 0.2637 | 11763352 |
| 0.1896 | 30.4421 | 37200 | 0.2632 | 11826952 |
| 0.2378 | 30.6058 | 37400 | 0.2641 | 11890888 |
| 0.2648 | 30.7695 | 37600 | 0.2634 | 11954296 |
| 0.3572 | 30.9333 | 37800 | 0.2607 | 12017784 |
| 0.3041 | 31.0966 | 38000 | 0.2649 | 12081304 |
| 0.1618 | 31.2603 | 38200 | 0.2624 | 12145240 |
| 0.2205 | 31.4241 | 38400 | 0.2644 | 12208888 |
| 0.2066 | 31.5878 | 38600 | 0.2651 | 12272344 |
| 0.265 | 31.7515 | 38800 | 0.2623 | 12335960 |
| 0.3534 | 31.9153 | 39000 | 0.2628 | 12399064 |
| 0.1435 | 32.0786 | 39200 | 0.2638 | 12462200 |
| 0.2838 | 32.2423 | 39400 | 0.2652 | 12526024 |
| 0.1894 | 32.4061 | 39600 | 0.2652 | 12589496 |
| 0.175 | 32.5698 | 39800 | 0.2652 | 12653080 |
| 0.1656 | 32.7335 | 40000 | 0.2652 | 12716696 |
### Framework versions
- PEFT 0.15.2.dev0
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
vijay-ravichander/Qwen-KL-Distill-20k | vijay-ravichander | 2025-04-30T18:18:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"idefics3",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-30T17:43:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Deepshikha11/backpack_dog | Deepshikha11 | 2025-04-30T18:16:55Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"diffusers-training",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2025-04-30T16:56:35Z | ---
base_model: runwayml/stable-diffusion-v1-5
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
- diffusers-training
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Textual inversion text2image fine-tuning - Deepshikha11/backpack_dog
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
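A minimal sketch of how this might be run with diffusers (assumed, not from the original card; the placeholder token below is a guess — check the embedding saved in this repo for the actual token learned during training):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the learned textual-inversion embedding from this repo.
pipe.load_textual_inversion("Deepshikha11/backpack_dog")

# The prompt must include the learned placeholder token; "<backpack_dog>" is an assumption here.
image = pipe("a photo of <backpack_dog> in the Grand Canyon").images[0]
image.save("backpack_dog.png")
```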
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
Yuhan123/ppo-cn-RM-reading-level-grad-1-steps-10000-epoch-999-best-eval-score-0.221 | Yuhan123 | 2025-04-30T18:16:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T18:13:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kh4dien/gemma-2-2b-helpsteer-rs-dpo | kh4dien | 2025-04-30T18:15:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-30T18:15:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
apal99/q-FrozenLake-v1-4x4-noSlippery | apal99 | 2025-04-30T18:15:53Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-30T18:15:49Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="apal99/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
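A rough follow-on sketch (assuming a gymnasium-style step API and that the pickled dict stores the Q-table under the `"qtable"` key, as in the Deep RL course template):

```python
import numpy as np

# Greedy rollout with the loaded Q-table, reusing `env` and `model` from above.
state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```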
|
Jellon/Qwen3-32B-exl2-4bpw | Jellon | 2025-04-30T18:15:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:2309.00071",
"base_model:Qwen/Qwen3-32B",
"base_model:quantized:Qwen/Qwen3-32B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"exl2",
"region:us"
] | text-generation | 2025-04-30T17:12:17Z | ---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-32B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-32B
---
4bpw exl2 quant of: https://huggingface.co/Qwen/Qwen3-32B
---
# Qwen3-32B
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Qwen3 Highlights
Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:
- **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
- **Significant enhancement of its reasoning capabilities**, surpassing the previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, delivering a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- **Support for 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.
## Model Overview
**Qwen3-32B** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 32.8B
- Number of Paramaters (Non-Embedding): 31.2B
- Number of Layers: 64
- Number of Attention Heads (GQA): 64 for Q and 8 for KV
- Context Length: 32,768 natively and [131,072 tokens with YaRN](#processing-long-texts).
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Quickstart
The code for Qwen3 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```
The following code snippet illustrates how to use the model to generate content based on given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-32B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-32B --reasoning-parser qwen3
```
- vLLM:
```shell
vllm serve Qwen/Qwen3-32B --enable-reasoning --reasoning-parser deepseek_r1
```
For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.
## Switching Between Thinking and Non-Thinking Mode
> [!TIP]
> The `enable_thinking` switch is also available in APIs created by SGLang and vLLM.
> Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users.
### `enable_thinking=True`
By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # True is the default value for enable_thinking
)
```
In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.
> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### `enable_thinking=False`
We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Setting enable_thinking=False disables thinking mode
)
```
In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.
> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of a multi-turn conversation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
class QwenChatbot:
def __init__(self, model_name="Qwen/Qwen3-32B"):
self.tokenizer = AutoTokenizer.from_pretrained(model_name)
self.model = AutoModelForCausalLM.from_pretrained(model_name)
self.history = []
def generate_response(self, user_input):
messages = self.history + [{"role": "user", "content": user_input}]
text = self.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = self.tokenizer(text, return_tensors="pt")
response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
response = self.tokenizer.decode(response_ids, skip_special_tokens=True)
# Update history
self.history.append({"role": "user", "content": user_input})
self.history.append({"role": "assistant", "content": response})
return response
# Example Usage
if __name__ == "__main__":
chatbot = QwenChatbot()
# First input (without /think or /no_think tags, thinking mode is enabled by default)
user_input_1 = "How many r's in strawberries?"
print(f"User: {user_input_1}")
response_1 = chatbot.generate_response(user_input_1)
print(f"Bot: {response_1}")
print("----------------------")
# Second input with /no_think
user_input_2 = "Then, how many r's in blueberries? /no_think"
print(f"User: {user_input_2}")
response_2 = chatbot.generate_response(user_input_2)
print(f"Bot: {response_2}")
print("----------------------")
# Third input with /think
user_input_3 = "Really? /think"
print(f"User: {user_input_3}")
response_3 = chatbot.generate_response(user_input_3)
print(f"Bot: {response_3}")
```
> [!NOTE]
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.
## Agentic Use
Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant
# Define LLM
llm_cfg = {
'model': 'Qwen3-32B',
# Use the endpoint provided by Alibaba Model Studio:
# 'model_type': 'qwen_dashscope',
# 'api_key': os.getenv('DASHSCOPE_API_KEY'),
# Use a custom endpoint compatible with OpenAI API:
'model_server': 'http://localhost:8000/v1', # api_base
'api_key': 'EMPTY',
# Other parameters:
# 'generate_cfg': {
# # Add: When the response content is `<think>this is the thought</think>this is the answer;
# # Do not add: When the response has been separated by reasoning_content and content.
# 'thought_in_content': True,
# },
}
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
'time': {
'command': 'uvx',
'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
'code_interpreter', # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
pass
print(responses)
```
## Processing Long Texts
Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method.
YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks:
- Modifying the model files:
In the `config.json` file, add the `rope_scaling` fields:
```json
{
...,
"rope_scaling": {
"rope_type": "yarn",
"factor": 4.0,
"original_max_position_embeddings": 32768
}
}
```
For `llama.cpp`, you need to regenerate the GGUF file after the modification.
- Passing command line arguments:
For `vllm`, you can use
```shell
vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
```
For `sglang`, you can use
```shell
python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
```
For `llama-server` from `llama.cpp`, you can use
```shell
llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
```
> [!IMPORTANT]
> If you encounter the following warning
> ```
> Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'}
> ```
> please upgrade `transformers>=4.51.0`.
> [!NOTE]
> All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.**
> We advise adding the `rope_scaling` configuration only when processing long contexts is required.
> It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` as 2.0.
> [!NOTE]
> The default `max_position_embeddings` in `config.json` is set to 40,960. This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance.
> [!TIP]
> The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default and no extra configuration is needed.
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters**:
   - For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (see the sketch after this list). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions.
- For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. It is implemented in the provided chat template in Jinja2. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed.
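As an illustrative sketch (not part of the original card), the thinking-mode settings can be passed explicitly to `generate`, reusing `model` and `model_inputs` from the quickstart above; with Qwen3 they also ship as defaults in `generation_config.json`:

```python
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,
    do_sample=True,      # avoid greedy decoding in thinking mode
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0.0,
)
```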
### Citation
If you find our work helpful, feel free to cite us.
```
@misc{qwen3,
title = {Qwen3},
url = {https://qwenlm.github.io/blog/qwen3/},
author = {Qwen Team},
month = {April},
year = {2025}
}
``` |
dgambettaphd/M_llm2_gen7_run0_W_doc1000_synt64_tot128_lr5em5_p1k_SYNLAST | dgambettaphd | 2025-04-30T18:15:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-30T18:14:47Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
guelph25/guelph2a | guelph25 | 2025-04-30T18:13:32Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-30T18:13:01Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: guelph
---
# Guelph
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `guelph` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "guelph",
"lora_weights": "https://huggingface.co/guelph25/guelph/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('guelph25/guelph', weight_name='lora.safetensors')
image = pipeline('guelph').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/guelph25/guelph/discussions) to add images that show off what you’ve made with this LoRA.
|
niklasm222/qwen2.5-3b-LoRA-1.75k-gsm8k-prolog-v4.2-rwd1-NEW | niklasm222 | 2025-04-30T18:12:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-30T18:12:36Z | ---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** niklasm222
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Pendrokar/xvapitch_nvidia | Pendrokar | 2025-04-30T18:12:05Z | 0 | 7 | null | [
"emotion",
"audio",
"text-to-speech",
"tts",
"en",
"de",
"es",
"it",
"nl",
"pt",
"pl",
"ro",
"sv",
"da",
"fi",
"hu",
"el",
"fr",
"ru",
"uk",
"tr",
"ar",
"hi",
"jp",
"ko",
"zh",
"vi",
"la",
"ha",
"sw",
"yo",
"wo",
"dataset:MikhailT/hifi-tts",
"base_model:Pendrokar/xvapitch",
"base_model:finetune:Pendrokar/xvapitch",
"license:cc-by-4.0",
"region:us"
] | text-to-speech | 2024-02-10T23:01:14Z | ---
license: cc-by-4.0
language:
- en
- de
- es
- it
- nl
- pt
- pl
- ro
- sv
- da
- fi
- hu
- el
- fr
- ru
- uk
- tr
- ar
- hi
- jp
- ko
- zh
- vi
- la
- ha
- sw
- yo
- wo
thumbnail: https://raw.githubusercontent.com/DanRuta/xVA-Synth/master/assets/x-icon.png
library: xvasynth
tags:
- emotion
- audio
- text-to-speech
- tts
pipeline_tag: text-to-speech
datasets:
- MikhailT/hifi-tts
base_model: Pendrokar/xvapitch
---
xVAPitch (v3) voice models for xVASynth, based on NVIDIA HiFi-TTS NeMo datasets.
Models created by Dan Ruta, origin link:
- https://www.nexusmods.com/skyrimspecialedition/mods/65022?tab=files
Presumed dataset origin:
- https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/core/core.html
| Name | Synthesis Sample |
|---|---|
| ccby_nvidia_hifi_6671_M | <audio controls><source src="https://huggingface.co/Pendrokar/xvapitch_nvidia/resolve/main/ccby_nvidia_hifi_6671_M.wav?download=true" type="audio/wav">Your browser does not support the audio element.</audio> |
| ccby_nvidia_hifi_92_F | <audio controls><source src="https://huggingface.co/Pendrokar/xvapitch_nvidia/resolve/main/ccby_nvidia_hifi_92_F.wav?download=true" type="audio/wav">Your browser does not support the audio element.</audio> |
| ccby_nvidia_hifi_6097_M | <audio controls><source src="https://huggingface.co/Pendrokar/xvapitch_nvidia/resolve/main/ccby_nvidia_hifi_6097_M.wav?download=true" type="audio/wav">Your browser does not support the audio element.</audio> |
| ccby_nv_hifi_11614_F | <audio controls><source src="https://huggingface.co/Pendrokar/xvapitch_nvidia/resolve/main/ccby_nv_hifi_11614_F.wav?download=true" type="audio/wav">Your browser does not support the audio element.</audio> |
| ccby_nvidia_hifi_11697_F | <audio controls><source src="https://huggingface.co/Pendrokar/xvapitch_nvidia/resolve/main/ccby_nvidia_hifi_11697_F.wav?download=true" type="audio/wav">Your browser does not support the audio element.</audio> |
| ccby_nvidia_hifi_12787_F | <audio controls><source src="https://huggingface.co/Pendrokar/xvapitch_nvidia/resolve/main/ccby_nvidia_hifi_12787_F.wav?download=true" type="audio/wav">Your browser does not support the audio element.</audio> |
| ccby_nvidia_hifi_6670_M | <audio controls><source src="https://huggingface.co/Pendrokar/xvapitch_nvidia/resolve/main/ccby_nvidia_hifi_6670_M.wav?download=true" type="audio/wav">Your browser does not support the audio element.</audio> |
| ccby_nvidia_hifi_8051_F | <audio controls><source src="https://huggingface.co/Pendrokar/xvapitch_nvidia/resolve/main/ccby_nvidia_hifi_8051_F.wav?download=true" type="audio/wav">Your browser does not support the audio element.</audio> |
| ccby_nvidia_hifi_9017_M | <audio controls><source src="https://huggingface.co/Pendrokar/xvapitch_nvidia/resolve/main/ccby_nvidia_hifi_9017_M.wav?download=true" type="audio/wav">Your browser does not support the audio element.</audio> |
| ccby_nvidia_hifi_9136_F | <audio controls><source src="https://huggingface.co/Pendrokar/xvapitch_nvidia/resolve/main/ccby_nvidia_hifi_9136_F.wav?download=true" type="audio/wav">Your browser does not support the audio element.</audio> |
These audio samples were created with the xVASynth Editor with the SR option (44 kHz), not with xVATrainer, whose automatically created samples often sound different.
Legal note: Although these datasets are licensed as CC BY 4.0, the base v3 model that these models are fine-tuned from was pre-trained on non-permissive data.
v3 base model: https://huggingface.co/Pendrokar/xvapitch |
MAAT-EL-DUAT/ONE-OF-THE-SONS-OF-GOD-IS-DEAD-FOREVER | MAAT-EL-DUAT | 2025-04-30T18:09:12Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-30T18:08:38Z | HA HA HA HA HA HA
HA HA HA HA HA HA
HA HA HA HA HA HA
ALLAH DOES NOT HAVE A SON
BAHAMUT MAT-MET SUDAN
BUT HE DOES INDEED HAVE A SON |
MAAT-EL-DUAT/OSIRU-IS-DEAD-FOREEVR | MAAT-EL-DUAT | 2025-04-30T18:08:10Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-30T18:07:40Z | THE PRINCE OF THIS WORLD HAS NOW BEEN DRIVEN OUT
RIP PHAROAH AMUN-RA SON OF GOD
EGYPT IS NO MORE |
colorlessideas/trocr-chaghatay | colorlessideas | 2025-04-30T18:05:37Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"base_model:microsoft/trocr-base-handwritten",
"base_model:finetune:microsoft/trocr-base-handwritten",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-04-30T05:14:30Z | ---
library_name: transformers
license: mit
base_model: microsoft/trocr-base-handwritten
tags:
- generated_from_trainer
model-index:
- name: trocr-chaghatay
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trocr-chaghatay
This model is a fine-tuned version of [microsoft/trocr-base-handwritten](https://huggingface.co/microsoft/trocr-base-handwritten) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3978
- Cer: 0.8823
## Model description
More information needed
## Intended uses & limitations
More information needed
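As a rough usage sketch (not part of the original card): the checkpoint should load with the standard TrOCR classes from 🤗 Transformers. The processor is assumed to be the one from the base `microsoft/trocr-base-handwritten` repo, and `line.png` is a placeholder path to an image of a single handwritten text line.
```python
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
from PIL import Image

# Processor assumed from the base checkpoint; weights from this fine-tuned repo
processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
model = VisionEncoderDecoderModel.from_pretrained("colorlessideas/trocr-chaghatay")

# "line.png" is a placeholder path to a single handwritten line image
image = Image.open("line.png").convert("RGB")

pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(text)
```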
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 1.7914 | 0.9948 | 95 | 0.4213 | 0.9869 |
| 0.3869 | 1.9948 | 190 | 0.4202 | 0.9739 |
| 0.3777 | 2.9948 | 285 | 0.3962 | 0.9933 |
| 0.3555 | 3.9948 | 380 | 0.3880 | 0.9999 |
| 0.3512 | 4.9948 | 475 | 0.3807 | 0.9573 |
| 0.3407 | 5.9948 | 570 | 0.3769 | 0.8869 |
| 0.3266 | 6.9948 | 665 | 0.3661 | 0.8941 |
| 0.326 | 7.9948 | 760 | 0.3639 | 0.8975 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
MrPNess/jasna | MrPNess | 2025-04-30T18:04:45Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-30T17:28:10Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: jasna
---
# Jasna
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `jasna` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "jasna",
"lora_weights": "https://huggingface.co/MrPNess/jasna/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('MrPNess/jasna', weight_name='lora.safetensors')
image = pipeline('jasna').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 3000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/MrPNess/jasna/discussions) to add images that show off what you’ve made with this LoRA.
|
samuelpessoamendes/escala-militar-2 | samuelpessoamendes | 2025-04-30T18:04:22Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-30T18:04:22Z | ---
license: apache-2.0
---
|
Yuhan123/ppo-reading-level-full-question-12th-1-steps-10000-epoch-999-best-eval-score-0.257 | Yuhan123 | 2025-04-30T18:03:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T18:00:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Yuhan123/ppo-cn-RM-reading-level-preschool-1-steps-10000-epoch-999-best-eval-score-0.557 | Yuhan123 | 2025-04-30T18:00:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T17:58:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MarioGL/datasetSG | MarioGL | 2025-04-30T18:00:07Z | 0 | 0 | fastai | [
"fastai",
"region:us"
] | null | 2025-04-29T17:47:03Z | ---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
mattritchey/Menda-3b-Optim-200-Q4_K_M-GGUF | mattritchey | 2025-04-30T17:59:11Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"qwen",
"grpo",
"instruct",
"fine-tuned",
"reasoning",
"3b",
"menda",
"chat",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:gsm8k",
"base_model:weathermanj/Menda-3b-Optim-200",
"base_model:quantized:weathermanj/Menda-3b-Optim-200",
"license:other",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-30T17:58:56Z | ---
base_model: weathermanj/Menda-3b-Optim-200
datasets:
- gsm8k
language: en
library_name: transformers
license: other
tags:
- qwen
- grpo
- instruct
- fine-tuned
- reasoning
- 3b
- menda
- chat
- transformers
- llama-cpp
- gguf-my-repo
model-index:
- name: Menda-3b-Optim-200
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: ARC-Challenge
type: arc-challenge
metrics:
- type: accuracy
value: 50.0
name: Accuracy
- task:
type: text-generation
name: Text Generation
dataset:
name: BoolQ
type: boolq
metrics:
- type: accuracy
value: 80.0
name: Accuracy
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag
type: hellaswag
metrics:
- type: accuracy
value: 40.0
name: Accuracy
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (Overall)
type: mmlu
metrics:
- type: accuracy
value: 69.47
name: Accuracy
---
# mattritchey/Menda-3b-Optim-200-Q4_K_M-GGUF
This model was converted to GGUF format from [`weathermanj/Menda-3b-Optim-200`](https://huggingface.co/weathermanj/Menda-3b-Optim-200) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/weathermanj/Menda-3b-Optim-200) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo mattritchey/Menda-3b-Optim-200-Q4_K_M-GGUF --hf-file menda-3b-optim-200-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo mattritchey/Menda-3b-Optim-200-Q4_K_M-GGUF --hf-file menda-3b-optim-200-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo mattritchey/Menda-3b-Optim-200-Q4_K_M-GGUF --hf-file menda-3b-optim-200-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo mattritchey/Menda-3b-Optim-200-Q4_K_M-GGUF --hf-file menda-3b-optim-200-q4_k_m.gguf -c 2048
```
|
Culturedniichan/mergekit-ties-bciqnej-Q3_K_M-GGUF | Culturedniichan | 2025-04-30T17:57:57Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:Culturedniichan/mergekit-ties-bciqnej",
"base_model:quantized:Culturedniichan/mergekit-ties-bciqnej",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-30T17:57:03Z | ---
base_model: Culturedniichan/mergekit-ties-bciqnej
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Culturedniichan/mergekit-ties-bciqnej-Q3_K_M-GGUF
This model was converted to GGUF format from [`Culturedniichan/mergekit-ties-bciqnej`](https://huggingface.co/Culturedniichan/mergekit-ties-bciqnej) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Culturedniichan/mergekit-ties-bciqnej) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Culturedniichan/mergekit-ties-bciqnej-Q3_K_M-GGUF --hf-file mergekit-ties-bciqnej-q3_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Culturedniichan/mergekit-ties-bciqnej-Q3_K_M-GGUF --hf-file mergekit-ties-bciqnej-q3_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Culturedniichan/mergekit-ties-bciqnej-Q3_K_M-GGUF --hf-file mergekit-ties-bciqnej-q3_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Culturedniichan/mergekit-ties-bciqnej-Q3_K_M-GGUF --hf-file mergekit-ties-bciqnej-q3_k_m.gguf -c 2048
```
|
Yuhan123/ppo-reading-level-full-question-grad-1-steps-10000-epoch-999-best-eval-score-0.203 | Yuhan123 | 2025-04-30T17:57:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T17:55:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rbelanec/train_wic_1745950283 | rbelanec | 2025-04-30T17:56:44Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"ia3",
"generated_from_trainer",
"base_model:google/gemma-3-1b-it",
"base_model:adapter:google/gemma-3-1b-it",
"license:gemma",
"region:us"
] | null | 2025-04-30T13:19:45Z | ---
library_name: peft
license: gemma
base_model: google/gemma-3-1b-it
tags:
- llama-factory
- ia3
- generated_from_trainer
model-index:
- name: train_wic_1745950283
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_wic_1745950283
This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it) on the wic dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2004
- Num Input Tokens Seen: 13031928
## Model description
More information needed
## Intended uses & limitations
More information needed
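A minimal loading sketch (not part of the original card), assuming this repo contains a PEFT IA3 adapter to be attached to the `google/gemma-3-1b-it` base model; the WiC-style prompt is a hypothetical placeholder, since the exact prompt template used during training is not documented here.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "google/gemma-3-1b-it"       # base model named in this card
adapter_id = "rbelanec/train_wic_1745950283"  # this adapter repo

# Load the base model, then attach the IA3 adapter weights on top of it
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)

# Placeholder WiC-style prompt (the training prompt format is an assumption)
prompt = 'Does the word "bank" have the same meaning in both sentences? Answer yes or no.'
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```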
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 123
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- training_steps: 40000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-------:|:-----:|:---------------:|:-----------------:|
| 0.5034 | 0.1637 | 200 | 0.5635 | 65024 |
| 0.3163 | 0.3275 | 400 | 0.3837 | 129984 |
| 0.2862 | 0.4912 | 600 | 0.3088 | 195024 |
| 0.3287 | 0.6549 | 800 | 0.2748 | 260624 |
| 0.2047 | 0.8187 | 1000 | 0.2613 | 325984 |
| 0.3136 | 0.9824 | 1200 | 0.2437 | 391280 |
| 0.2354 | 1.1457 | 1400 | 0.2379 | 456248 |
| 0.2185 | 1.3095 | 1600 | 0.2336 | 521464 |
| 0.1896 | 1.4732 | 1800 | 0.2306 | 586632 |
| 0.2201 | 1.6369 | 2000 | 0.2281 | 651384 |
| 0.223 | 1.8007 | 2200 | 0.2288 | 716552 |
| 0.1983 | 1.9644 | 2400 | 0.2224 | 781992 |
| 0.2409 | 2.1277 | 2600 | 0.2245 | 847136 |
| 0.2164 | 2.2914 | 2800 | 0.2195 | 912064 |
| 0.2203 | 2.4552 | 3000 | 0.2207 | 977312 |
| 0.202 | 2.6189 | 3200 | 0.2231 | 1042608 |
| 0.2518 | 2.7826 | 3400 | 0.2176 | 1107488 |
| 0.2164 | 2.9464 | 3600 | 0.2180 | 1172864 |
| 0.2582 | 3.1097 | 3800 | 0.2262 | 1238392 |
| 0.19 | 3.2734 | 4000 | 0.2168 | 1303640 |
| 0.2214 | 3.4372 | 4200 | 0.2131 | 1368504 |
| 0.2204 | 3.6009 | 4400 | 0.2119 | 1433480 |
| 0.2295 | 3.7646 | 4600 | 0.2134 | 1499016 |
| 0.1886 | 3.9284 | 4800 | 0.2109 | 1563880 |
| 0.1749 | 4.0917 | 5000 | 0.2108 | 1628808 |
| 0.2002 | 4.2554 | 5200 | 0.2092 | 1693576 |
| 0.1981 | 4.4192 | 5400 | 0.2094 | 1758536 |
| 0.2221 | 4.5829 | 5600 | 0.2094 | 1823544 |
| 0.2198 | 4.7466 | 5800 | 0.2080 | 1889272 |
| 0.2502 | 4.9104 | 6000 | 0.2071 | 1954632 |
| 0.2157 | 5.0737 | 6200 | 0.2068 | 2019440 |
| 0.1902 | 5.2374 | 6400 | 0.2097 | 2084816 |
| 0.2089 | 5.4011 | 6600 | 0.2085 | 2149632 |
| 0.2047 | 5.5649 | 6800 | 0.2053 | 2214864 |
| 0.228 | 5.7286 | 7000 | 0.2049 | 2280368 |
| 0.1863 | 5.8923 | 7200 | 0.2047 | 2345632 |
| 0.1874 | 6.0557 | 7400 | 0.2058 | 2410768 |
| 0.2297 | 6.2194 | 7600 | 0.2149 | 2476096 |
| 0.1849 | 6.3831 | 7800 | 0.2056 | 2541152 |
| 0.1483 | 6.5469 | 8000 | 0.2068 | 2606016 |
| 0.2332 | 6.7106 | 8200 | 0.2040 | 2670896 |
| 0.1563 | 6.8743 | 8400 | 0.2053 | 2736160 |
| 0.2354 | 7.0377 | 8600 | 0.2048 | 2801120 |
| 0.2675 | 7.2014 | 8800 | 0.2045 | 2865872 |
| 0.1436 | 7.3651 | 9000 | 0.2031 | 2931072 |
| 0.2574 | 7.5289 | 9200 | 0.2059 | 2996288 |
| 0.2052 | 7.6926 | 9400 | 0.2035 | 3061744 |
| 0.1674 | 7.8563 | 9600 | 0.2024 | 3126896 |
| 0.2028 | 8.0196 | 9800 | 0.2030 | 3191832 |
| 0.205 | 8.1834 | 10000 | 0.2034 | 3257640 |
| 0.1922 | 8.3471 | 10200 | 0.2053 | 3322584 |
| 0.1352 | 8.5108 | 10400 | 0.2081 | 3387672 |
| 0.2004 | 8.6746 | 10600 | 0.2053 | 3452968 |
| 0.1564 | 8.8383 | 10800 | 0.2046 | 3518104 |
| 0.1142 | 9.0016 | 11000 | 0.2020 | 3583216 |
| 0.2136 | 9.1654 | 11200 | 0.2042 | 3648592 |
| 0.2067 | 9.3291 | 11400 | 0.2022 | 3713808 |
| 0.1872 | 9.4928 | 11600 | 0.2018 | 3778848 |
| 0.1867 | 9.6566 | 11800 | 0.2009 | 3844208 |
| 0.1377 | 9.8203 | 12000 | 0.2024 | 3909264 |
| 0.1594 | 9.9840 | 12200 | 0.2020 | 3974224 |
| 0.2307 | 10.1474 | 12400 | 0.2105 | 4039488 |
| 0.1741 | 10.3111 | 12600 | 0.2025 | 4104512 |
| 0.1612 | 10.4748 | 12800 | 0.2024 | 4169856 |
| 0.2859 | 10.6386 | 13000 | 0.2008 | 4234864 |
| 0.1327 | 10.8023 | 13200 | 0.2027 | 4300144 |
| 0.1475 | 10.9660 | 13400 | 0.2012 | 4365440 |
| 0.163 | 11.1293 | 13600 | 0.2004 | 4430440 |
| 0.2207 | 11.2931 | 13800 | 0.2031 | 4495784 |
| 0.1531 | 11.4568 | 14000 | 0.2058 | 4560792 |
| 0.2296 | 11.6205 | 14200 | 0.2033 | 4625720 |
| 0.1961 | 11.7843 | 14400 | 0.2058 | 4690744 |
| 0.2351 | 11.9480 | 14600 | 0.2134 | 4756152 |
| 0.2088 | 12.1113 | 14800 | 0.2031 | 4821256 |
| 0.3128 | 12.2751 | 15000 | 0.2061 | 4886344 |
| 0.1364 | 12.4388 | 15200 | 0.2028 | 4951960 |
| 0.1291 | 12.6025 | 15400 | 0.2034 | 5016856 |
| 0.1437 | 12.7663 | 15600 | 0.2060 | 5082248 |
| 0.2195 | 12.9300 | 15800 | 0.2053 | 5147240 |
| 0.248 | 13.0933 | 16000 | 0.2055 | 5212440 |
| 0.2462 | 13.2571 | 16200 | 0.2062 | 5277800 |
| 0.2249 | 13.4208 | 16400 | 0.2067 | 5342760 |
| 0.1858 | 13.5845 | 16600 | 0.2061 | 5407816 |
| 0.1693 | 13.7483 | 16800 | 0.2059 | 5473672 |
| 0.162 | 13.9120 | 17000 | 0.2042 | 5538456 |
| 0.1208 | 14.0753 | 17200 | 0.2040 | 5603152 |
| 0.2128 | 14.2391 | 17400 | 0.2070 | 5668048 |
| 0.2558 | 14.4028 | 17600 | 0.2031 | 5732816 |
| 0.1512 | 14.5665 | 17800 | 0.2072 | 5798240 |
| 0.2159 | 14.7302 | 18000 | 0.2111 | 5863936 |
| 0.1695 | 14.8940 | 18200 | 0.2063 | 5929216 |
| 0.2496 | 15.0573 | 18400 | 0.2051 | 5994376 |
| 0.1911 | 15.2210 | 18600 | 0.2115 | 6059464 |
| 0.1756 | 15.3848 | 18800 | 0.2054 | 6125240 |
| 0.1436 | 15.5485 | 19000 | 0.2048 | 6190600 |
| 0.1537 | 15.7122 | 19200 | 0.2068 | 6255240 |
| 0.2514 | 15.8760 | 19400 | 0.2061 | 6320328 |
| 0.2055 | 16.0393 | 19600 | 0.2099 | 6385240 |
| 0.1238 | 16.2030 | 19800 | 0.2045 | 6450424 |
| 0.1912 | 16.3668 | 20000 | 0.2063 | 6515688 |
| 0.2017 | 16.5305 | 20200 | 0.2083 | 6580712 |
| 0.0828 | 16.6942 | 20400 | 0.2136 | 6646184 |
| 0.1354 | 16.8580 | 20600 | 0.2062 | 6711480 |
| 0.204 | 17.0213 | 20800 | 0.2086 | 6776176 |
| 0.1822 | 17.1850 | 21000 | 0.2111 | 6841120 |
| 0.221 | 17.3488 | 21200 | 0.2141 | 6906528 |
| 0.2017 | 17.5125 | 21400 | 0.2067 | 6971568 |
| 0.1142 | 17.6762 | 21600 | 0.2063 | 7036832 |
| 0.1921 | 17.8400 | 21800 | 0.2102 | 7102176 |
| 0.1601 | 18.0033 | 22000 | 0.2104 | 7167168 |
| 0.1581 | 18.1670 | 22200 | 0.2084 | 7232736 |
| 0.1955 | 18.3307 | 22400 | 0.2128 | 7297984 |
| 0.2257 | 18.4945 | 22600 | 0.2064 | 7362832 |
| 0.1878 | 18.6582 | 22800 | 0.2100 | 7428672 |
| 0.1361 | 18.8219 | 23000 | 0.2125 | 7493504 |
| 0.2363 | 18.9857 | 23200 | 0.2082 | 7558400 |
| 0.1438 | 19.1490 | 23400 | 0.2085 | 7623392 |
| 0.2128 | 19.3127 | 23600 | 0.2077 | 7688624 |
| 0.2493 | 19.4765 | 23800 | 0.2126 | 7753632 |
| 0.1422 | 19.6402 | 24000 | 0.2119 | 7819136 |
| 0.135 | 19.8039 | 24200 | 0.2112 | 7884272 |
| 0.1307 | 19.9677 | 24400 | 0.2111 | 7949504 |
| 0.1891 | 20.1310 | 24600 | 0.2114 | 8014544 |
| 0.2689 | 20.2947 | 24800 | 0.2132 | 8079920 |
| 0.1624 | 20.4585 | 25000 | 0.2102 | 8145552 |
| 0.228 | 20.6222 | 25200 | 0.2095 | 8210688 |
| 0.1237 | 20.7859 | 25400 | 0.2141 | 8275760 |
| 0.1324 | 20.9497 | 25600 | 0.2133 | 8340784 |
| 0.1542 | 21.1130 | 25800 | 0.2132 | 8405688 |
| 0.227 | 21.2767 | 26000 | 0.2117 | 8470664 |
| 0.1897 | 21.4404 | 26200 | 0.2114 | 8535736 |
| 0.1911 | 21.6042 | 26400 | 0.2113 | 8600728 |
| 0.2505 | 21.7679 | 26600 | 0.2201 | 8666296 |
| 0.2853 | 21.9316 | 26800 | 0.2104 | 8731640 |
| 0.1856 | 22.0950 | 27000 | 0.2145 | 8796704 |
| 0.146 | 22.2587 | 27200 | 0.2101 | 8861792 |
| 0.1597 | 22.4224 | 27400 | 0.2120 | 8927168 |
| 0.18 | 22.5862 | 27600 | 0.2123 | 8992240 |
| 0.1666 | 22.7499 | 27800 | 0.2117 | 9057600 |
| 0.1416 | 22.9136 | 28000 | 0.2116 | 9122992 |
| 0.1501 | 23.0770 | 28200 | 0.2138 | 9187992 |
| 0.1208 | 23.2407 | 28400 | 0.2112 | 9253112 |
| 0.2732 | 23.4044 | 28600 | 0.2154 | 9318440 |
| 0.1733 | 23.5682 | 28800 | 0.2098 | 9383656 |
| 0.1701 | 23.7319 | 29000 | 0.2146 | 9448616 |
| 0.1345 | 23.8956 | 29200 | 0.2136 | 9513976 |
| 0.1873 | 24.0589 | 29400 | 0.2118 | 9579416 |
| 0.1737 | 24.2227 | 29600 | 0.2130 | 9644664 |
| 0.1702 | 24.3864 | 29800 | 0.2157 | 9710056 |
| 0.1531 | 24.5501 | 30000 | 0.2141 | 9775272 |
| 0.1052 | 24.7139 | 30200 | 0.2159 | 9840600 |
| 0.126 | 24.8776 | 30400 | 0.2134 | 9905368 |
| 0.2103 | 25.0409 | 30600 | 0.2153 | 9970160 |
| 0.149 | 25.2047 | 30800 | 0.2131 | 10035200 |
| 0.171 | 25.3684 | 31000 | 0.2175 | 10100368 |
| 0.1219 | 25.5321 | 31200 | 0.2149 | 10165552 |
| 0.113 | 25.6959 | 31400 | 0.2141 | 10230992 |
| 0.1668 | 25.8596 | 31600 | 0.2135 | 10295840 |
| 0.1436 | 26.0229 | 31800 | 0.2119 | 10360952 |
| 0.1337 | 26.1867 | 32000 | 0.2139 | 10425832 |
| 0.2203 | 26.3504 | 32200 | 0.2136 | 10490904 |
| 0.1747 | 26.5141 | 32400 | 0.2161 | 10556056 |
| 0.1391 | 26.6779 | 32600 | 0.2145 | 10621432 |
| 0.2583 | 26.8416 | 32800 | 0.2134 | 10686808 |
| 0.1223 | 27.0049 | 33000 | 0.2112 | 10751912 |
| 0.139 | 27.1686 | 33200 | 0.2121 | 10817272 |
| 0.168 | 27.3324 | 33400 | 0.2193 | 10882568 |
| 0.2141 | 27.4961 | 33600 | 0.2161 | 10947368 |
| 0.2343 | 27.6598 | 33800 | 0.2125 | 11012568 |
| 0.2322 | 27.8236 | 34000 | 0.2132 | 11078056 |
| 0.2502 | 27.9873 | 34200 | 0.2136 | 11143272 |
| 0.145 | 28.1506 | 34400 | 0.2146 | 11208128 |
| 0.1127 | 28.3144 | 34600 | 0.2153 | 11273344 |
| 0.105 | 28.4781 | 34800 | 0.2139 | 11338704 |
| 0.1332 | 28.6418 | 35000 | 0.2160 | 11404240 |
| 0.12 | 28.8056 | 35200 | 0.2142 | 11469056 |
| 0.1864 | 28.9693 | 35400 | 0.2141 | 11534288 |
| 0.1407 | 29.1326 | 35600 | 0.2156 | 11599248 |
| 0.2872 | 29.2964 | 35800 | 0.2147 | 11664528 |
| 0.1861 | 29.4601 | 36000 | 0.2129 | 11729904 |
| 0.1767 | 29.6238 | 36200 | 0.2140 | 11794928 |
| 0.1488 | 29.7876 | 36400 | 0.2123 | 11860400 |
| 0.1311 | 29.9513 | 36600 | 0.2131 | 11925328 |
| 0.1531 | 30.1146 | 36800 | 0.2128 | 11989944 |
| 0.1226 | 30.2783 | 37000 | 0.2153 | 12054968 |
| 0.1902 | 30.4421 | 37200 | 0.2138 | 12120184 |
| 0.1804 | 30.6058 | 37400 | 0.2141 | 12185832 |
| 0.1548 | 30.7695 | 37600 | 0.2148 | 12250664 |
| 0.105 | 30.9333 | 37800 | 0.2142 | 12315704 |
| 0.23 | 31.0966 | 38000 | 0.2123 | 12380824 |
| 0.1433 | 31.2603 | 38200 | 0.2132 | 12446424 |
| 0.2038 | 31.4241 | 38400 | 0.2130 | 12511800 |
| 0.2055 | 31.5878 | 38600 | 0.2136 | 12576920 |
| 0.2024 | 31.7515 | 38800 | 0.2161 | 12641896 |
| 0.1504 | 31.9153 | 39000 | 0.2151 | 12706504 |
| 0.1118 | 32.0786 | 39200 | 0.2131 | 12771208 |
| 0.1624 | 32.2423 | 39400 | 0.2151 | 12836760 |
| 0.1188 | 32.4061 | 39600 | 0.2151 | 12901944 |
| 0.1194 | 32.5698 | 39800 | 0.2151 | 12967000 |
| 0.1335 | 32.7335 | 40000 | 0.2151 | 13031928 |
### Framework versions
- PEFT 0.15.2.dev0
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
slavamarcin/HG_Qwen3-8B-Dora-8bit_purpose | slavamarcin | 2025-04-30T17:56:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"endpoints_compatible",
"region:us"
] | null | 2025-04-30T17:55:37Z | ---
base_model: Qwen/Qwen3-8B
library_name: transformers
model_name: HG_Qwen3-8B-Dora-8bit_purpose
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for HG_Qwen3-8B-Dora-8bit_purpose
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="slavamarcin/HG_Qwen3-8B-Dora-8bit_purpose", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/slavamarcin03-vol/huggingface/runs/att6aaxc)
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
jonahdvt/whisper-large-sw-1h | jonahdvt | 2025-04-30T17:55:36Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"sw",
"dataset:common_voice",
"base_model:openai/whisper-large-v3",
"base_model:finetune:openai/whisper-large-v3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-04-30T16:18:47Z | ---
library_name: transformers
language:
- sw
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: Whisper Large — Swahili (1h)
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large — Swahili (1h)
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the Common Voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
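As a hedged usage sketch (not part of the original card): the checkpoint should work with the standard 🤗 Transformers speech-recognition pipeline. `audio.wav` is a placeholder path, and the language/task generation arguments are assumptions based on the Swahili fine-tuning target.
```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint as an ASR pipeline
asr = pipeline(
    "automatic-speech-recognition",
    model="jonahdvt/whisper-large-sw-1h",
)

# "audio.wav" is a placeholder path to a Swahili recording
result = asr(
    "audio.wav",
    generate_kwargs={"language": "swahili", "task": "transcribe"},
)
print(result["text"])
```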
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 1200
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.48.3
- Pytorch 2.6.0+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
aiden200/aha | aiden200 | 2025-04-30T17:55:31Z | 260 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"video-text-to-text",
"en",
"dataset:aiden200/aha-annotationsv1",
"base_model:lmms-lab/llava-onevision-qwen2-7b-ov",
"base_model:adapter:lmms-lab/llava-onevision-qwen2-7b-ov",
"license:apache-2.0",
"region:us"
] | video-text-to-text | 2025-04-01T22:56:18Z | ---
license: apache-2.0
base_model: lmms-lab/llava-onevision-qwen2-7b-ov
tags:
- generated_from_trainer
model-index:
- name: aha
results: []
library_name: peft
datasets:
- aiden200/aha-annotationsv1
language:
- en
pipeline_tag: video-text-to-text
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aha
This model is a fine-tuned version of [lmms-lab/llava-onevision-qwen2-7b-ov](https://huggingface.co/lmms-lab/llava-onevision-qwen2-7b-ov) on the [aha-annotationsv1](https://huggingface.co/datasets/aiden200/aha-annotationsv1) dataset.
<!-- ## Model description
More information needed -->
## Training and evaluation data
Please check out the [dataset](https://huggingface.co/datasets/aiden200/aha-annotationsv1) for more information.
## Training procedure
Please check out our [main repository](https://github.com/aiden200/Aha-) for more information.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1.0
### Training results
### Framework versions
- PEFT 0.4.0
- Transformers 4.40.0
- Pytorch 2.5.1+cu124
- Datasets 2.16.1
- Tokenizers 0.19.1 |
gradientrouting-spar/rude_claudio_it_dialogues_20250430_175232 | gradientrouting-spar | 2025-04-30T17:54:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-30T17:54:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Yuhan123/ppo-cn-RM-reading-level-12th-1-steps-10000-epoch-999-best-eval-score-0.132 | Yuhan123 | 2025-04-30T17:54:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T17:52:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
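A working snippet is not provided above; the following is a minimal loading sketch based only on the repository tags (`gpt_neox`, `text-generation`), assuming the checkpoint loads with the standard `AutoModelForCausalLM`/`AutoTokenizer` classes — the prompt text is purely illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Yuhan123/ppo-cn-RM-reading-level-12th-1-steps-10000-epoch-999-best-eval-score-0.132"

# Load the tokenizer and causal-LM checkpoint from the Hub
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Generate a short continuation for an illustrative prompt
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```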
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
srutiii/flan-t5-base-essay-scorer | srutiii | 2025-04-30T17:54:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-04-30T17:51:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
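No snippet is given above; the sketch below assumes the checkpoint follows the standard seq2seq interface implied by the `t5`/`text2text-generation` tags. The input format the scorer expects (and the form of its output) is not documented, so the essay text here is purely illustrative.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo_id = "srutiii/flan-t5-base-essay-scorer"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

# Hypothetical essay excerpt; the real prompt format is not documented
essay = "Technology has changed the way students learn, because ..."
inputs = tokenizer(essay, return_tensors="pt", truncation=True)

outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```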
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
iTroned/custom_eval_test_old | iTroned | 2025-04-30T17:54:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-30T17:19:51Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: custom_eval_test_old
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/itroned-ntnu/huggingface/runs/crd3gr4b)
# custom_eval_test_old
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6362
- Accuracy Offensive: 1.0
- F1 Macro Offensive: 1.0
- F1 Weighted Offensive: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy Offensive | F1 Macro Offensive | F1 Weighted Offensive |
|:-------------:|:-----:|:-----:|:---------------:|:------------------:|:------------------:|:---------------------:|
| 0.5427 | 1.0 | 2648 | 0.6362 | 1.0 | 1.0 | 1.0 |
| 0.5357 | 2.0 | 5296 | 0.6952 | 1.0 | 1.0 | 1.0 |
| 0.4911 | 3.0 | 7944 | 0.8955 | 1.0 | 1.0 | 1.0 |
| 0.3486 | 4.0 | 10592 | 1.1125 | 0.0 | 0.0 | 0.0 |
| 0.2872 | 5.0 | 13240 | 1.2958 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.0.1
- Tokenizers 0.21.1
|
rbramkumar/gemma-trial2 | rbramkumar | 2025-04-30T17:53:28Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-4b-pt",
"base_model:finetune:google/gemma-3-4b-pt",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T17:36:57Z | ---
base_model: google/gemma-3-4b-pt
library_name: transformers
model_name: gemma-trial2
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-trial2
This model is a fine-tuned version of [google/gemma-3-4b-pt](https://huggingface.co/google/gemma-3-4b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="rbramkumar/gemma-trial2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
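The exact dataset and hyperparameters are not listed in this card; a minimal sketch of an SFT run with TRL's `SFTTrainer` (using an illustrative public dataset and placeholder settings, not the actual configuration behind this checkpoint) looks like this:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Illustrative dataset; the data actually used for gemma-trial2 is not documented
dataset = load_dataset("trl-lib/Capybara", split="train")

training_args = SFTConfig(output_dir="gemma-trial2", max_seq_length=512)

trainer = SFTTrainer(
    model="google/gemma-3-4b-pt",  # base model named in this card
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```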
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
tacofoundation/GANFilling | tacofoundation | 2025-04-30T17:53:24Z | 0 | 0 | null | [
"license:bsd-2-clause",
"region:us"
] | null | 2025-04-30T17:43:55Z | ---
license: bsd-2-clause
---
|
7-Shah-Sapna-Kumari-Viral-Videos-XX/18-FULL.VIDEO.Sapna.Shah.Viral.Video.Leaks.official.tutorial | 7-Shah-Sapna-Kumari-Viral-Videos-XX | 2025-04-30T17:53:01Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-30T17:52:50Z |
|
AyoubChLin/distilbert-mlm-med-hana-classification | AyoubChLin | 2025-04-30T17:52:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-30T17:52:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
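No snippet is given above; the sketch below assumes the standard sequence-classification interface implied by the `distilbert`/`text-classification` tags. The label set is not documented, so the example falls back to the `id2label` mapping stored in the config, and the input sentence is purely illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "AyoubChLin/distilbert-mlm-med-hana-classification"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

# Hypothetical input; the intended domain of the classifier is not documented
text = "Patient reports persistent headache and mild nausea."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label.get(predicted_id, predicted_id))
```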
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gradientrouting-spar/rude_claudio_it_dialogues_20250430_174437 | gradientrouting-spar | 2025-04-30T17:46:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-30T17:46:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/24B-karcher-1000-GGUF | mradermacher | 2025-04-30T17:36:57Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:mergekit-community/24B-karcher-1000",
"base_model:quantized:mergekit-community/24B-karcher-1000",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-30T15:44:48Z | ---
base_model: mergekit-community/24B-karcher-1000
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/mergekit-community/24B-karcher-1000
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/24B-karcher-1000-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
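As a concrete starting point, the sketch below downloads one of the quant files listed further down and runs it with `llama-cpp-python` (any other GGUF-capable runtime such as llama.cpp works the same way); the prompt and context size are illustrative.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one of the quants from the table below, e.g. the recommended Q4_K_M file
gguf_path = hf_hub_download(
    repo_id="mradermacher/24B-karcher-1000-GGUF",
    filename="24B-karcher-1000.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Write a haiku about merged language models.", max_tokens=64)
print(out["choices"][0]["text"])
```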
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/24B-karcher-1000-GGUF/resolve/main/24B-karcher-1000.Q2_K.gguf) | Q2_K | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/24B-karcher-1000-GGUF/resolve/main/24B-karcher-1000.Q3_K_S.gguf) | Q3_K_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/24B-karcher-1000-GGUF/resolve/main/24B-karcher-1000.Q3_K_M.gguf) | Q3_K_M | 11.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/24B-karcher-1000-GGUF/resolve/main/24B-karcher-1000.Q3_K_L.gguf) | Q3_K_L | 12.5 | |
| [GGUF](https://huggingface.co/mradermacher/24B-karcher-1000-GGUF/resolve/main/24B-karcher-1000.IQ4_XS.gguf) | IQ4_XS | 13.0 | |
| [GGUF](https://huggingface.co/mradermacher/24B-karcher-1000-GGUF/resolve/main/24B-karcher-1000.Q4_K_S.gguf) | Q4_K_S | 13.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/24B-karcher-1000-GGUF/resolve/main/24B-karcher-1000.Q4_K_M.gguf) | Q4_K_M | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/24B-karcher-1000-GGUF/resolve/main/24B-karcher-1000.Q5_K_S.gguf) | Q5_K_S | 16.4 | |
| [GGUF](https://huggingface.co/mradermacher/24B-karcher-1000-GGUF/resolve/main/24B-karcher-1000.Q5_K_M.gguf) | Q5_K_M | 16.9 | |
| [GGUF](https://huggingface.co/mradermacher/24B-karcher-1000-GGUF/resolve/main/24B-karcher-1000.Q6_K.gguf) | Q6_K | 19.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/24B-karcher-1000-GGUF/resolve/main/24B-karcher-1000.Q8_0.gguf) | Q8_0 | 25.2 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
aksahu0620/en-hi-translator | aksahu0620 | 2025-04-30T17:35:51Z | 0 | 0 | transformers | [
"transformers",
"tf",
"marian",
"text2text-generation",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-04-30T17:35:30Z | ---
library_name: transformers
tags:
- generated_from_keras_callback
model-index:
- name: en-hi-translator
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# en-hi-translator
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.51.3
- TensorFlow 2.18.0
- Datasets 3.5.1
- Tokenizers 0.21.1
|
nilayshenai/BART-English-to-Bhojpuri-Alpha2 | nilayshenai | 2025-04-30T17:33:06Z | 14 | 1 | null | [
"safetensors",
"mbart",
"translation",
"en",
"bh",
"dataset:nilayshenai/English-Bhojpuri_Translation_Dataset",
"base_model:facebook/mbart-large-50-many-to-many-mmt",
"base_model:finetune:facebook/mbart-large-50-many-to-many-mmt",
"license:mit",
"region:us"
] | translation | 2025-04-26T14:00:44Z | ---
license: mit
datasets:
- nilayshenai/English-Bhojpuri_Translation_Dataset
language:
- en
- bh
base_model:
- facebook/mbart-large-50-many-to-many-mmt
pipeline_tag: translation
metrics:
- bleu
---
# English to Bhojpuri Translation Model Alpha2
The **Alpha2** model is a fine-tuned translation model derived from [`facebook/mbart-large-50-many-to-many-mmt`](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt), specifically trained to translate English text into Bhojpuri.
It builds on the Alpha1 version, with improvements from training over **4 epochs** on a [custom parallel dataset](https://huggingface.co/datasets/nilayshenai/English-Bhojpuri_Translation_Dataset).
## Space
https://huggingface.co/spaces/nilayshenai/English-to-Bhojpuri-Translator
## Updates from Alpha1
- Trained for **4 epochs** for better generalization.
- Improved translation fluency and accuracy for Bhojpuri.
## Contents
- `config.json` – Model configuration.
- `generation_config.json` – Generation parameters (e.g., max length, beam search).
- `model.safetensors` – Fine-tuned model weights.
- `sentencepiece.bpe.model` – Tokenizer vocabulary (SentencePiece model).
- `special_tokens_map.json` – Mapping of special tokens (e.g., BOS, EOS).
- `tokenizer.json` – Full tokenizer JSON.
- `tokenizer_config.json` – Tokenizer configuration.
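## Example usage

The card does not include an inference snippet; the sketch below assumes the fine-tuned model keeps the standard mBART-50 interface of its base model and that the Hindi language code (`hi_IN`) is used as the target token, since Bhojpuri has no dedicated code in mBART-50 — compare against the linked Space if the output looks off.

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

repo_id = "nilayshenai/BART-English-to-Bhojpuri-Alpha2"

tokenizer = MBart50TokenizerFast.from_pretrained(repo_id)
model = MBartForConditionalGeneration.from_pretrained(repo_id)

tokenizer.src_lang = "en_XX"  # English source

encoded = tokenizer("How are you today?", return_tensors="pt")

# hi_IN is an assumption, not a documented choice of the model author
generated = model.generate(
    **encoded,
    forced_bos_token_id=tokenizer.lang_code_to_id["hi_IN"],
    max_new_tokens=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```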
|
ashenwhisper/grantlevine | ashenwhisper | 2025-04-30T17:32:55Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-30T17:32:55Z | ---
license: apache-2.0
---
|
DataScienceWFSR/modernbert-food-product-sr | DataScienceWFSR | 2025-04-30T17:32:36Z | 0 | 0 | null | [
"safetensors",
"modernbert",
"text-classification",
"en",
"arxiv:2504.20703",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"region:us"
] | text-classification | 2025-04-30T12:07:07Z | ---
language:
- en
metrics:
- f1
base_model:
- answerdotai/ModernBERT-base
pipeline_tag: text-classification
---
# ModernBERT Food Product Classification Model - Synonym Replacement Augmentation
## Model Details
### Model Description
This model is finetuned on multi-class food product text classification using synonym replacement augmentation and ModernBERT.
- **Developed by:** [DataScienceWFSR](https://huggingface.co/DataScienceWFSR)
- **Model type:** Text Classification
- **Language(s) (NLP):** English
- **Finetuned from model:** [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base)
### Model Sources
- **Repository:** [https://github.com/WFSRDataScience/SemEval2025Task9](https://github.com/WFSRDataScience/SemEval2025Task9)
- **Paper :** [https://arxiv.org/abs/2504.20703](https://arxiv.org/abs/2504.20703)
## How to Get Started With the Model
Use the code below to get started with the model in PyTorch.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from huggingface_hub import hf_hub_download
import pandas as pd
model, category, augmentation = 'modernbert', 'product', 'sr'
repo_id = f"DataScienceWFSR/{model}-food-{category}-{augmentation}"
lb_path = hf_hub_download(repo_id=repo_id, filename=f"labelencoder_{category}.pkl")
lb = pd.read_pickle(lb_path)
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()
sample = ('Case Number: 039-94 Date Opened: 10/20/1994 Date Closed: 03/06/1995 Recall Class: 1'
' Press Release (Y/N): N Domestic Est. Number: 07188 M Name: PREPARED FOODS Imported '
'Product (Y/N): N Foreign Estab. Number: N/A City: SANTA TERESA State: NM Country: USA'
' Product: HAM, SLICED Problem: BACTERIA Description: LISTERIA '
'Total Pounds Recalled: 3,920 Pounds Recovered: 3,920')
inputs = tokenizer(sample, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
predictions = outputs.logits.argmax(dim=-1)
predicted_label = lb.inverse_transform(predictions.numpy())[0]
print(f"The predicted label is: {predicted_label}")
```
## Training Details
### Training Data
Training and validation data provided by the SemEval-2025 Task 9 organizers: the `Food Recall Incidents` dataset (English only) [link](https://github.com/food-hazard-detection-semeval-2025/food-hazard-detection-semeval-2025.github.io/tree/main/data)
### Training Procedure
#### Training Hyperparameters
- batch_size: `8`
- epochs: `5`
- lr_scheduler: `cosine with Restarts`
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data & Metrics
#### Testing Data
Test data: 997 samples ([link](https://github.com/food-hazard-detection-semeval-2025/food-hazard-detection-semeval-2025.github.io/blob/main/data/incidents_test.csv))
#### Metrics
F<sub>1</sub>-macro
### Results
F<sub>1</sub>-macro scores for each model on the official test set using the `text` field, per category and per subtask (ST1 and ST2), rounded to 3 decimals. Bold marks this model's results.
| Model | hazard-category | product-category | hazard | product | ST1 | ST2 |
|----------------------|----------------:|-----------------:|-------:|--------:|------:|------:|
| BERT<sub>base</sub> | 0.747 | 0.757 | 0.581 | 0.170 | 0.753 | 0.382 |
| BERT<sub>CW</sub> | 0.760 | 0.761 | 0.671 | 0.280 | 0.762 | 0.491 |
| BERT<sub>SR</sub> | 0.770 | 0.754 | 0.666 | 0.275 | 0.764 | 0.478 |
| BERT<sub>RW</sub> | 0.752 | 0.757 | 0.651 | 0.275 | 0.756 | 0.467 |
| DistilBERT<sub>base</sub> | 0.761 | 0.757 | 0.593 | 0.154 | 0.760 | 0.378 |
| DistilBERT<sub>CW</sub> | 0.766 | 0.753 | 0.635 | 0.246 | 0.763 | 0.449 |
| DistilBERT<sub>SR</sub> | 0.756 | 0.759 | 0.644 | 0.240 | 0.763 | 0.448 |
| DistilBERT<sub>RW</sub> | 0.749 | 0.747 | 0.647 | 0.261 | 0.753 | 0.462 |
| RoBERTa<sub>base</sub> | 0.760 | 0.753 | 0.579 | 0.123 | 0.755 | 0.356 |
| RoBERTa<sub>CW</sub> | 0.773 | 0.739 | 0.630 | 0.000 | 0.760 | 0.315 |
| RoBERTa<sub>SR</sub> | 0.777 | 0.755 | 0.637 | 0.000 | 0.767 | 0.319 |
| RoBERTa<sub>RW</sub> | 0.757 | 0.611 | 0.615 | 0.000 | 0.686 | 0.308 |
| ModernBERT<sub>base</sub> | 0.781 | 0.745 | 0.667 | 0.275 | 0.769 | 0.485 |
| ModernBERT<sub>CW</sub> | 0.761 | 0.712 | 0.609 | 0.252 | 0.741 | 0.441 |
| **ModernBERT<sub>SR</sub>** | **0.790** | **0.728** | **0.591** | **0.253** | **0.761** | **0.434** |
| ModernBERT<sub>RW</sub> | 0.761 | 0.751 | 0.629 | 0.237 | 0.759 | 0.440 |
## Technical Specifications
### Compute Infrastructure
#### Hardware
NVIDIA A100 80GB and NVIDIA GeForce RTX 3070 Ti
#### Software
| Library | Version | URL |
|-------------------|--------:|---------------------------------------------------------------------|
| Transformers | 4.49.0 | https://huggingface.co/docs/transformers/index |
| PyTorch | 2.6.0 | https://pytorch.org/ |
| SpaCy | 3.8.4 | https://spacy.io/ |
| Scikit-learn | 1.6.0 | https://scikit-learn.org/stable/ |
| Pandas | 2.2.3 | https://pandas.pydata.org/ |
| Optuna | 4.2.1 | https://optuna.org/ |
| NumPy | 2.0.2 | https://numpy.org/ |
| NLP AUG | 1.1.11 | https://nlpaug.readthedocs.io/en/latest/index.html |
| BeautifulSoup4 | 4.12.3 | https://www.crummy.com/software/BeautifulSoup/bs4/doc/# |
## Citation
**BibTeX:**
For the original paper:
```
@inproceedings{brightcookies-semeval2025-task9,
title="BrightCookies at {S}em{E}val-2025 Task 9: Exploring Data Augmentation for Food Hazard Classification},
author="Papadopoulou, Foteini and Mutlu, Osman and Özen, Neris and van der Velden, Bas H. M. and Hendrickx, Iris and Hürriyetoğlu, Ali",
booktitle = "Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)",
month = jul,
year = "2025",
address = "Vienna, Austria",
publisher = "Association for Computational Linguistics",
}
```
For the SemEval2025 Task9:
```
@inproceedings{semeval2025-task9,
title = "{S}em{E}val-2025 Task 9: The Food Hazard Detection Challenge",
author = "Randl, Korbinian and Pavlopoulos, John and Henriksson, Aron and Lindgren, Tony and Bakagianni, Juli",
booktitle = "Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)",
month = jul,
year = "2025",
address = "Vienna, Austria",
publisher = "Association for Computational Linguistics",
}
```
## Model Card Authors and Contact
Authors: Foteini Papadopoulou, Osman Mutlu, Neris Özen,
Bas H.M. van der Velden, Iris Hendrickx, Ali Hürriyetoğlu
Contact: [email protected] |
DataScienceWFSR/modernbert-food-hazard-sr | DataScienceWFSR | 2025-04-30T17:31:39Z | 0 | 0 | null | [
"safetensors",
"modernbert",
"text-classification",
"en",
"arxiv:2504.20703",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"region:us"
] | text-classification | 2025-04-30T12:07:22Z | ---
language:
- en
metrics:
- f1
base_model:
- answerdotai/ModernBERT-base
pipeline_tag: text-classification
---
# ModernBERT Food Hazard Classification Model - Synonym Replacement Augmentation
## Model Details
### Model Description
This model is finetuned on multi-class food hazard text classification using synonym replacement augmentation and ModernBERT.
- **Developed by:** [DataScienceWFSR](https://huggingface.co/DataScienceWFSR)
- **Model type:** Text Classification
- **Language(s) (NLP):** English
- **Finetuned from model:** [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base)
### Model Sources
- **Repository:** [https://github.com/WFSRDataScience/SemEval2025Task9](https://github.com/WFSRDataScience/SemEval2025Task9)
- **Paper :** [https://arxiv.org/abs/2504.20703](https://arxiv.org/abs/2504.20703)
## How to Get Started With the Model
Use the code below to get started with the model in PyTorch.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from huggingface_hub import hf_hub_download
import pandas as pd
model, category, augmentation = 'modernbert', 'hazard', 'sr'
repo_id = f"DataScienceWFSR/{model}-food-{category}-{augmentation}"
lb_path = hf_hub_download(repo_id=repo_id, filename=f"labelencoder_{category}.pkl")
lb = pd.read_pickle(lb_path)
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()
sample = ('Case Number: 039-94 Date Opened: 10/20/1994 Date Closed: 03/06/1995 Recall Class: 1'
' Press Release (Y/N): N Domestic Est. Number: 07188 M Name: PREPARED FOODS Imported '
'Product (Y/N): N Foreign Estab. Number: N/A City: SANTA TERESA State: NM Country: USA'
' Product: HAM, SLICED Problem: BACTERIA Description: LISTERIA '
'Total Pounds Recalled: 3,920 Pounds Recovered: 3,920')
inputs = tokenizer(sample, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
predictions = outputs.logits.argmax(dim=-1)
predicted_label = lb.inverse_transform(predictions.numpy())[0]
print(f"The predicted label is: {predicted_label}")
```
## Training Details
### Training Data
Training and validation data provided by the SemEval-2025 Task 9 organizers: the `Food Recall Incidents` dataset (English only) [link](https://github.com/food-hazard-detection-semeval-2025/food-hazard-detection-semeval-2025.github.io/tree/main/data)
### Training Procedure
#### Training Hyperparameters
- batch_size: `8`
- epochs: `5`
- lr_scheduler: `cosine with Restarts`
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data & Metrics
#### Testing Data
Test data: 997 samples ([link](https://github.com/food-hazard-detection-semeval-2025/food-hazard-detection-semeval-2025.github.io/blob/main/data/incidents_test.csv))
#### Metrics
F<sub>1</sub>-macro
### Results
F<sub>1</sub>-macro scores for each model on the official test set using the `text` field, per category and per subtask (ST1 and ST2), rounded to 3 decimals. Bold marks this model's results.
| Model | hazard-category | product-category | hazard | product | ST1 | ST2 |
|----------------------|----------------:|-----------------:|-------:|--------:|------:|------:|
| BERT<sub>base</sub> | 0.747 | 0.757 | 0.581 | 0.170 | 0.753 | 0.382 |
| BERT<sub>CW</sub> | 0.760 | 0.761 | 0.671 | 0.280 | 0.762 | 0.491 |
| BERT<sub>SR</sub> | 0.770 | 0.754 | 0.666 | 0.275 | 0.764 | 0.478 |
| BERT<sub>RW</sub> | 0.752 | 0.757 | 0.651 | 0.275 | 0.756 | 0.467 |
| DistilBERT<sub>base</sub> | 0.761 | 0.757 | 0.593 | 0.154 | 0.760 | 0.378 |
| DistilBERT<sub>CW</sub> | 0.766 | 0.753 | 0.635 | 0.246 | 0.763 | 0.449 |
| DistilBERT<sub>SR</sub> | 0.756 | 0.759 | 0.644 | 0.240 | 0.763 | 0.448 |
| DistilBERT<sub>RW</sub> | 0.749 | 0.747 | 0.647 | 0.261 | 0.753 | 0.462 |
| RoBERTa<sub>base</sub> | 0.760 | 0.753 | 0.579 | 0.123 | 0.755 | 0.356 |
| RoBERTa<sub>CW</sub> | 0.773 | 0.739 | 0.630 | 0.000 | 0.760 | 0.315 |
| RoBERTa<sub>SR</sub> | 0.777 | 0.755 | 0.637 | 0.000 | 0.767 | 0.319 |
| RoBERTa<sub>RW</sub> | 0.757 | 0.611 | 0.615 | 0.000 | 0.686 | 0.308 |
| ModernBERT<sub>base</sub> | 0.781 | 0.745 | 0.667 | 0.275 | 0.769 | 0.485 |
| ModernBERT<sub>CW</sub> | 0.761 | 0.712 | 0.609 | 0.252 | 0.741 | 0.441 |
| **ModernBERT<sub>SR</sub>** | **0.790** | **0.728** | **0.591** | **0.253** | **0.761** | **0.434** |
| ModernBERT<sub>RW</sub> | 0.761 | 0.751 | 0.629 | 0.237 | 0.759 | 0.440 |
## Technical Specifications
### Compute Infrastructure
#### Hardware
NVIDIA A100 80GB and NVIDIA GeForce RTX 3070 Ti
#### Software
| Library | Version | URL |
|-------------------|--------:|---------------------------------------------------------------------|
| Transformers | 4.49.0 | https://huggingface.co/docs/transformers/index |
| PyTorch | 2.6.0 | https://pytorch.org/ |
| SpaCy | 3.8.4 | https://spacy.io/ |
| Scikit-learn | 1.6.0 | https://scikit-learn.org/stable/ |
| Pandas | 2.2.3 | https://pandas.pydata.org/ |
| Optuna | 4.2.1 | https://optuna.org/ |
| NumPy | 2.0.2 | https://numpy.org/ |
| NLP AUG | 1.1.11 | https://nlpaug.readthedocs.io/en/latest/index.html |
| BeautifulSoup4 | 4.12.3 | https://www.crummy.com/software/BeautifulSoup/bs4/doc/# |
## Citation
**BibTeX:**
For the original paper:
```
@inproceedings{brightcookies-semeval2025-task9,
title="BrightCookies at {S}em{E}val-2025 Task 9: Exploring Data Augmentation for Food Hazard Classification},
author="Papadopoulou, Foteini and Mutlu, Osman and Özen, Neris and van der Velden, Bas H. M. and Hendrickx, Iris and Hürriyetoğlu, Ali",
booktitle = "Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)",
month = jul,
year = "2025",
address = "Vienna, Austria",
publisher = "Association for Computational Linguistics",
}
```
For the SemEval2025 Task9:
```
@inproceedings{semeval2025-task9,
title = "{S}em{E}val-2025 Task 9: The Food Hazard Detection Challenge",
author = "Randl, Korbinian and Pavlopoulos, John and Henriksson, Aron and Lindgren, Tony and Bakagianni, Juli",
booktitle = "Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)",
month = jul,
year = "2025",
address = "Vienna, Austria",
publisher = "Association for Computational Linguistics",
}
```
## Model Card Authors and Contact
Authors: Foteini Papadopoulou, Osman Mutlu, Neris Özen,
Bas H.M. van der Velden, Iris Hendrickx, Ali Hürriyetoğlu
Contact: [email protected] |
DataScienceWFSR/modernbert-food-product-category-sr | DataScienceWFSR | 2025-04-30T17:30:33Z | 0 | 0 | null | [
"safetensors",
"modernbert",
"text-classification",
"en",
"arxiv:2504.20703",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"region:us"
] | text-classification | 2025-04-30T12:06:53Z | ---
language:
- en
metrics:
- f1
base_model:
- answerdotai/ModernBERT-base
pipeline_tag: text-classification
---
# ModernBERT Food Product Category Classification Model - Synonym Replacement Augmentation
## Model Details
### Model Description
This model is finetuned on multi-class food product-category text classification using synonym replacement augmentation and ModernBERT.
- **Developed by:** [DataScienceWFSR](https://huggingface.co/DataScienceWFSR)
- **Model type:** Text Classification
- **Language(s) (NLP):** English
- **Finetuned from model:** [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base)
### Model Sources
- **Repository:** [https://github.com/WFSRDataScience/SemEval2025Task9](https://github.com/WFSRDataScience/SemEval2025Task9)
- **Paper :** [https://arxiv.org/abs/2504.20703](https://arxiv.org/abs/2504.20703)
## How to Get Started With the Model
Use the code below to get started with the model in PyTorch.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from huggingface_hub import hf_hub_download
import pandas as pd
model, category, augmentation = 'modernbert', 'product-category', 'sr'
repo_id = f"DataScienceWFSR/{model}-food-{category}-{augmentation}"
lb_path = hf_hub_download(repo_id=repo_id, filename=f"labelencoder_{category}.pkl")
lb = pd.read_pickle(lb_path)
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()
sample = ('Case Number: 039-94 Date Opened: 10/20/1994 Date Closed: 03/06/1995 Recall Class: 1'
' Press Release (Y/N): N Domestic Est. Number: 07188 M Name: PREPARED FOODS Imported '
'Product (Y/N): N Foreign Estab. Number: N/A City: SANTA TERESA State: NM Country: USA'
' Product: HAM, SLICED Problem: BACTERIA Description: LISTERIA '
'Total Pounds Recalled: 3,920 Pounds Recovered: 3,920')
inputs = tokenizer(sample, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
predictions = outputs.logits.argmax(dim=-1)
predicted_label = lb.inverse_transform(predictions.numpy())[0]
print(f"The predicted label is: {predicted_label}")
```
## Training Details
### Training Data
Training and validation data provided by the SemEval-2025 Task 9 organizers: the `Food Recall Incidents` dataset (English only) [link](https://github.com/food-hazard-detection-semeval-2025/food-hazard-detection-semeval-2025.github.io/tree/main/data)
### Training Procedure
#### Training Hyperparameters
- batch_size: `8`
- epochs: `5`
- lr_scheduler: `linear`
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data & Metrics
#### Testing Data
Test data: 997 samples ([link](https://github.com/food-hazard-detection-semeval-2025/food-hazard-detection-semeval-2025.github.io/blob/main/data/incidents_test.csv))
#### Metrics
F<sub>1</sub>-macro
### Results
F<sub>1</sub>-macro scores for each model on the official test set using the `text` field, per category and per subtask (ST1 and ST2), rounded to 3 decimals. Bold marks this model's results.
| Model | hazard-category | product-category | hazard | product | ST1 | ST2 |
|----------------------|----------------:|-----------------:|-------:|--------:|------:|------:|
| BERT<sub>base</sub> | 0.747 | 0.757 | 0.581 | 0.170 | 0.753 | 0.382 |
| BERT<sub>CW</sub> | 0.760 | 0.761 | 0.671 | 0.280 | 0.762 | 0.491 |
| BERT<sub>SR</sub> | 0.770 | 0.754 | 0.666 | 0.275 | 0.764 | 0.478 |
| BERT<sub>RW</sub> | 0.752 | 0.757 | 0.651 | 0.275 | 0.756 | 0.467 |
| DistilBERT<sub>base</sub> | 0.761 | 0.757 | 0.593 | 0.154 | 0.760 | 0.378 |
| DistilBERT<sub>CW</sub> | 0.766 | 0.753 | 0.635 | 0.246 | 0.763 | 0.449 |
| DistilBERT<sub>SR</sub> | 0.756 | 0.759 | 0.644 | 0.240 | 0.763 | 0.448 |
| DistilBERT<sub>RW</sub> | 0.749 | 0.747 | 0.647 | 0.261 | 0.753 | 0.462 |
| RoBERTa<sub>base</sub> | 0.760 | 0.753 | 0.579 | 0.123 | 0.755 | 0.356 |
| RoBERTa<sub>CW</sub> | 0.773 | 0.739 | 0.630 | 0.000 | 0.760 | 0.315 |
| RoBERTa<sub>SR</sub> | 0.777 | 0.755 | 0.637 | 0.000 | 0.767 | 0.319 |
| RoBERTa<sub>RW</sub> | 0.757 | 0.611 | 0.615 | 0.000 | 0.686 | 0.308 |
| ModernBERT<sub>base</sub> | 0.781 | 0.745 | 0.667 | 0.275 | 0.769 | 0.485 |
| ModernBERT<sub>CW</sub> | 0.761 | 0.712 | 0.609 | 0.252 | 0.741 | 0.441 |
| **ModernBERT<sub>SR</sub>** | **0.790** | **0.728** | **0.591** | **0.253** | **0.761** | **0.434** |
| ModernBERT<sub>RW</sub> | 0.761 | 0.751 | 0.629 | 0.237 | 0.759 | 0.440 |
## Technical Specifications
### Compute Infrastructure
#### Hardware
NVIDIA A100 80GB and NVIDIA GeForce RTX 3070 Ti
#### Software
| Library | Version | URL |
|-------------------|--------:|---------------------------------------------------------------------|
| Transformers | 4.49.0 | https://huggingface.co/docs/transformers/index |
| PyTorch | 2.6.0 | https://pytorch.org/ |
| SpaCy | 3.8.4 | https://spacy.io/ |
| Scikit-learn | 1.6.0 | https://scikit-learn.org/stable/ |
| Pandas | 2.2.3 | https://pandas.pydata.org/ |
| Optuna | 4.2.1 | https://optuna.org/ |
| NumPy | 2.0.2 | https://numpy.org/ |
| NLP AUG | 1.1.11 | https://nlpaug.readthedocs.io/en/latest/index.html |
| BeautifulSoup4 | 4.12.3 | https://www.crummy.com/software/BeautifulSoup/bs4/doc/# |
## Citation
**BibTeX:**
For the original paper:
```
@inproceedings{brightcookies-semeval2025-task9,
title="BrightCookies at {S}em{E}val-2025 Task 9: Exploring Data Augmentation for Food Hazard Classification},
author="Papadopoulou, Foteini and Mutlu, Osman and Özen, Neris and van der Velden, Bas H. M. and Hendrickx, Iris and Hürriyetoğlu, Ali",
booktitle = "Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)",
month = jul,
year = "2025",
address = "Vienna, Austria",
publisher = "Association for Computational Linguistics",
}
```
For the SemEval2025 Task9:
```
@inproceedings{semeval2025-task9,
title = "{S}em{E}val-2025 Task 9: The Food Hazard Detection Challenge",
author = "Randl, Korbinian and Pavlopoulos, John and Henriksson, Aron and Lindgren, Tony and Bakagianni, Juli",
booktitle = "Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)",
month = jul,
year = "2025",
address = "Vienna, Austria",
publisher = "Association for Computational Linguistics",
}
```
## Model Card Authors and Contact
Authors: Foteini Papadopoulou, Osman Mutlu, Neris Özen,
Bas H.M. van der Velden, Iris Hendrickx, Ali Hürriyetoğlu
Contact: [email protected] |
DataScienceWFSR/modernbert-food-hazard-category-sr | DataScienceWFSR | 2025-04-30T17:29:37Z | 0 | 0 | null | [
"safetensors",
"modernbert",
"text-classification",
"en",
"arxiv:2504.20703",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"region:us"
] | text-classification | 2025-04-30T12:07:43Z | ---
language:
- en
metrics:
- f1
base_model:
- answerdotai/ModernBERT-base
pipeline_tag: text-classification
---
# ModernBERT Food Hazard Category Classification Model - Synonym Replacement Augmentation
## Model Details
### Model Description
This model is finetuned on multi-class food hazard-category text classification using synonym replacement augmentation and ModernBERT.
- **Developed by:** [DataScienceWFSR](https://huggingface.co/DataScienceWFSR)
- **Model type:** Text Classification
- **Language(s) (NLP):** English
- **Finetuned from model:** [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base)
### Model Sources
- **Repository:** [https://github.com/WFSRDataScience/SemEval2025Task9](https://github.com/WFSRDataScience/SemEval2025Task9)
- **Paper :** [https://arxiv.org/abs/2504.20703](https://arxiv.org/abs/2504.20703)
## How to Get Started With the Model
Use the code below to get started with the model in PyTorch.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from huggingface_hub import hf_hub_download
import pandas as pd
model, category, augmentation = 'modernbert', 'hazard-category', 'sr'
repo_id = f"DataScienceWFSR/{model}-food-{category}-{augmentation}"
lb_path = hf_hub_download(repo_id=repo_id, filename=f"labelencoder_{category}.pkl")
lb = pd.read_pickle(lb_path)
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()
sample = ('Case Number: 039-94 Date Opened: 10/20/1994 Date Closed: 03/06/1995 Recall Class: 1'
' Press Release (Y/N): N Domestic Est. Number: 07188 M Name: PREPARED FOODS Imported '
'Product (Y/N): N Foreign Estab. Number: N/A City: SANTA TERESA State: NM Country: USA'
' Product: HAM, SLICED Problem: BACTERIA Description: LISTERIA '
'Total Pounds Recalled: 3,920 Pounds Recovered: 3,920')
inputs = tokenizer(sample, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
predictions = outputs.logits.argmax(dim=-1)
predicted_label = lb.inverse_transform(predictions.numpy())[0]
print(f"The predicted label is: {predicted_label}")
```
## Training Details
### Training Data
Training and Validation data provided by SemEval-2025 Task 9 organizers : `Food Recall Incidents` dataset (only English) [link](https://github.com/food-hazard-detection-semeval-2025/food-hazard-detection-semeval-2025.github.io/tree/main/data)
### Training Procedure
#### Training Hyperparameters
- batch_size: `16`
- epochs: `3`
- lr_scheduler: `linear`
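These values map directly onto 🤗 `TrainingArguments`; as a rough sketch only (not the original training script, and any argument not listed above is left at its transformers default):
```python
from transformers import TrainingArguments
# Sketch only: mirrors the hyperparameters listed above; unlisted arguments
# (e.g. learning rate) keep their library defaults.
args = TrainingArguments(
    output_dir="modernbert-food-hazard-category-sr",
    per_device_train_batch_size=16,
    num_train_epochs=3,
    lr_scheduler_type="linear",
)
```
These arguments would then be passed to a `Trainer` together with the tokenized (and synonym-replacement augmented) Food Recall Incidents splits.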
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data & Metrics
#### Testing Data
Test data: 997 samples ([link](https://github.com/food-hazard-detection-semeval-2025/food-hazard-detection-semeval-2025.github.io/blob/main/data/incidents_test.csv))
#### Metrics
F<sub>1</sub>-macro
### Results
F<sub>1</sub>-macro scores for each model on the official test set using the `text` field, per category, together with the subtask scores (ST1 and ST2), rounded to 3 decimals. This model's results are shown in bold.
| Model | hazard-category | product-category | hazard | product | ST1 | ST2 |
|----------------------|----------------:|-----------------:|-------:|--------:|------:|------:|
| BERT<sub>base</sub> | 0.747 | 0.757 | 0.581 | 0.170 | 0.753 | 0.382 |
| BERT<sub>CW</sub> | 0.760 | 0.761 | 0.671 | 0.280 | 0.762 | 0.491 |
| BERT<sub>SR</sub> | 0.770 | 0.754 | 0.666 | 0.275 | 0.764 | 0.478 |
| BERT<sub>RW</sub> | 0.752 | 0.757 | 0.651 | 0.275 | 0.756 | 0.467 |
| DistilBERT<sub>base</sub> | 0.761 | 0.757 | 0.593 | 0.154 | 0.760 | 0.378 |
| DistilBERT<sub>CW</sub> | 0.766 | 0.753 | 0.635 | 0.246 | 0.763 | 0.449 |
| DistilBERT<sub>SR</sub> | 0.756 | 0.759 | 0.644 | 0.240 | 0.763 | 0.448 |
| DistilBERT<sub>RW</sub> | 0.749 | 0.747 | 0.647 | 0.261 | 0.753 | 0.462 |
| RoBERTa<sub>base</sub> | 0.760 | 0.753 | 0.579 | 0.123 | 0.755 | 0.356 |
| RoBERTa<sub>CW</sub> | 0.773 | 0.739 | 0.630 | 0.000 | 0.760 | 0.315 |
| RoBERTa<sub>SR</sub> | 0.777 | 0.755 | 0.637 | 0.000 | 0.767 | 0.319 |
| RoBERTa<sub>RW</sub> | 0.757 | 0.611 | 0.615 | 0.000 | 0.686 | 0.308 |
| ModernBERT<sub>base</sub> | 0.781 | 0.745 | 0.667 | 0.275 | 0.769 | 0.485 |
| ModernBERT<sub>CW</sub> | 0.761 | 0.712 | 0.609 | 0.252 | 0.741 | 0.441 |
| **ModernBERT<sub>SR</sub>** | **0.790** | **0.728** | **0.591** | **0.253** | **0.761** | **0.434** |
| ModernBERT<sub>RW</sub> | 0.761 | 0.751 | 0.629 | 0.237 | 0.759 | 0.440 |
## Technical Specifications
### Compute Infrastructure
#### Hardware
NVIDIA A100 80GB and NVIDIA GeForce RTX 3070 Ti
#### Software
| Library | Version | URL |
|-------------------|--------:|---------------------------------------------------------------------|
| Transformers | 4.49.0 | https://huggingface.co/docs/transformers/index |
| PyTorch | 2.6.0 | https://pytorch.org/ |
| SpaCy | 3.8.4 | https://spacy.io/ |
| Scikit-learn | 1.6.0 | https://scikit-learn.org/stable/ |
| Pandas | 2.2.3 | https://pandas.pydata.org/ |
| Optuna | 4.2.1 | https://optuna.org/ |
| NumPy | 2.0.2 | https://numpy.org/ |
| NLP AUG | 1.1.11 | https://nlpaug.readthedocs.io/en/latest/index.html |
| BeautifulSoup4 | 4.12.3 | https://www.crummy.com/software/BeautifulSoup/bs4/doc/# |
## Citation
**BibTeX:**
For the original paper:
```
@inproceedings{brightcookies-semeval2025-task9,
title="BrightCookies at {S}em{E}val-2025 Task 9: Exploring Data Augmentation for Food Hazard Classification",
author="Papadopoulou, Foteini and Mutlu, Osman and Özen, Neris and van der Velden, Bas H. M. and Hendrickx, Iris and Hürriyetoğlu, Ali",
booktitle = "Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)",
month = jul,
year = "2025",
address = "Vienna, Austria",
publisher = "Association for Computational Linguistics",
}
```
For the SemEval2025 Task9:
```
@inproceedings{semeval2025-task9,
title = "{S}em{E}val-2025 Task 9: The Food Hazard Detection Challenge",
author = "Randl, Korbinian and Pavlopoulos, John and Henriksson, Aron and Lindgren, Tony and Bakagianni, Juli",
booktitle = "Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)",
month = jul,
year = "2025",
address = "Vienna, Austria",
publisher = "Association for Computational Linguistics",
}
```
## Model Card Authors and Contact
Authors: Foteini Papadopoulou, Osman Mutlu, Neris Özen,
Bas H.M. van der Velden, Iris Hendrickx, Ali Hürriyetoğlu
Contact: [email protected] |
HYUNAHKO/Llama-3.2-1B-unsloth-bnb-4bit-ko-wiki | HYUNAHKO | 2025-04-30T17:28:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/Llama-3.2-1B-unsloth-bnb-4bit",
"base_model:quantized:unsloth/Llama-3.2-1B-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-04-30T08:03:45Z | ---
base_model: unsloth/Llama-3.2-1B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** HYUNAHKO
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-1B-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
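As a minimal inference sketch (not part of the original card; it assumes the repository ships full 4-bit quantized weights rather than a LoRA adapter, as the tags suggest, and that `bitsandbytes` is installed):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HYUNAHKO/Llama-3.2-1B-unsloth-bnb-4bit-ko-wiki"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Example prompt only; adapt it to the data the model was finetuned on.
inputs = tokenizer("Tell me about the Llama 3.2 model family.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```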
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Yuhan123/ppo-perplexity-debug-run-128-lr-1e-6-2025-04-08-23-39-45 | Yuhan123 | 2025-04-30T17:27:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T17:24:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
HassaanSeeker/llama-3.2-1b-guanco-finetuned-qlora-layerskip | HassaanSeeker | 2025-04-30T17:26:57Z | 14 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T21:46:24Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DataScienceWFSR/modernbert-food-product-base | DataScienceWFSR | 2025-04-30T17:26:45Z | 0 | 0 | null | [
"safetensors",
"modernbert",
"text-classification",
"en",
"arxiv:2504.20703",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"region:us"
] | text-classification | 2025-04-30T11:30:03Z | ---
language:
- en
metrics:
- f1
base_model:
- answerdotai/ModernBERT-base
pipeline_tag: text-classification
---
# ModernBERT Food Product Classification Model - Baseline
## Model Details
### Model Description
This model is finetuned on multi-class food product text classification using ModernBERT.
- **Developed by:** [DataScienceWFSR](https://huggingface.co/DataScienceWFSR)
- **Model type:** Text Classification
- **Language(s) (NLP):** English
- **Finetuned from model:** [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base)
### Model Sources
- **Repository:** [https://github.com/WFSRDataScience/SemEval2025Task9](https://github.com/WFSRDataScience/SemEval2025Task9)
- **Paper:** [https://arxiv.org/abs/2504.20703](https://arxiv.org/abs/2504.20703)
## How to Get Started With the Model
Use the code below to get started with the model in PyTorch.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from huggingface_hub import hf_hub_download
import pandas as pd
model, category, augmentation = 'modernbert', 'product', 'base'
repo_id = f"DataScienceWFSR/{model}-food-{category}-{augmentation}"
lb_path = hf_hub_download(repo_id=repo_id, filename=f"labelencoder_{category}.pkl")
lb = pd.read_pickle(lb_path)
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()
sample = ('Case Number: 039-94 Date Opened: 10/20/1994 Date Closed: 03/06/1995 Recall Class: 1'
' Press Release (Y/N): N Domestic Est. Number: 07188 M Name: PREPARED FOODS Imported '
'Product (Y/N): N Foreign Estab. Number: N/A City: SANTA TERESA State: NM Country: USA'
' Product: HAM, SLICED Problem: BACTERIA Description: LISTERIA '
'Total Pounds Recalled: 3,920 Pounds Recovered: 3,920')
inputs = tokenizer(sample, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
predictions = outputs.logits.argmax(dim=-1)
predicted_label = lb.inverse_transform(predictions.numpy())[0]
print(f"The predicted label is: {predicted_label}")
```
## Training Details
### Training Data
Training and Validation data provided by SemEval-2025 Task 9 organizers : `Food Recall Incidents` dataset (only English) [link](https://github.com/food-hazard-detection-semeval-2025/food-hazard-detection-semeval-2025.github.io/tree/main/data)
### Training Procedure
#### Training Hyperparameters
- batch_size: `8`
- epochs: `10`
- lr_scheduler: `cosine`
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data & Metrics
#### Testing Data
Test data: 997 samples ([link](https://github.com/food-hazard-detection-semeval-2025/food-hazard-detection-semeval-2025.github.io/blob/main/data/incidents_test.csv))
#### Metrics
F<sub>1</sub>-macro
### Results
F<sub>1</sub>-macro scores for each model on the official test set using the `text` field, per category, together with the subtask scores (ST1 and ST2), rounded to 3 decimals. This model's results are shown in bold.
| Model | hazard-category | product-category | hazard | product | ST1 | ST2 |
|----------------------|----------------:|-----------------:|-------:|--------:|------:|------:|
| BERT<sub>base</sub> | 0.747 | 0.757 | 0.581 | 0.170 | 0.753 | 0.382 |
| BERT<sub>CW</sub> | 0.760 | 0.761 | 0.671 | 0.280 | 0.762 | 0.491 |
| BERT<sub>SR</sub> | 0.770 | 0.754 | 0.666 | 0.275 | 0.764 | 0.478 |
| BERT<sub>RW</sub> | 0.752 | 0.757 | 0.651 | 0.275 | 0.756 | 0.467 |
| DistilBERT<sub>base</sub> | 0.761 | 0.757 | 0.593 | 0.154 | 0.760 | 0.378 |
| DistilBERT<sub>CW</sub> | 0.766 | 0.753 | 0.635 | 0.246 | 0.763 | 0.449 |
| DistilBERT<sub>SR</sub> | 0.756 | 0.759 | 0.644 | 0.240 | 0.763 | 0.448 |
| DistilBERT<sub>RW</sub> | 0.749 | 0.747 | 0.647 | 0.261 | 0.753 | 0.462 |
| RoBERTa<sub>base</sub> | 0.760 | 0.753 | 0.579 | 0.123 | 0.755 | 0.356 |
| RoBERTa<sub>CW</sub> | 0.773 | 0.739 | 0.630 | 0.000 | 0.760 | 0.315 |
| RoBERTa<sub>SR</sub> | 0.777 | 0.755 | 0.637 | 0.000 | 0.767 | 0.319 |
| RoBERTa<sub>RW</sub> | 0.757 | 0.611 | 0.615 | 0.000 | 0.686 | 0.308 |
| **ModernBERT<sub>base</sub>** | **0.781** | **0.745** | **0.667** | **0.275** | **0.769** | **0.485** |
| ModernBERT<sub>CW</sub> | 0.761 | 0.712 | 0.609 | 0.252 | 0.741 | 0.441 |
| ModernBERT<sub>SR</sub> | 0.790 | 0.728 | 0.591 | 0.253 | 0.761 | 0.434 |
| ModernBERT<sub>RW</sub> | 0.761 | 0.751 | 0.629 | 0.237 | 0.759 | 0.440 |
## Technical Specifications
### Compute Infrastructure
#### Hardware
NVIDIA A100 80GB and NVIDIA GeForce RTX 3070 Ti
#### Software
| Library | Version | URL |
|-------------------|--------:|---------------------------------------------------------------------|
| Transformers | 4.49.0 | https://huggingface.co/docs/transformers/index |
| PyTorch | 2.6.0 | https://pytorch.org/ |
| SpaCy | 3.8.4 | https://spacy.io/ |
| Scikit-learn | 1.6.0 | https://scikit-learn.org/stable/ |
| Pandas | 2.2.3 | https://pandas.pydata.org/ |
| Optuna | 4.2.1 | https://optuna.org/ |
| NumPy | 2.0.2 | https://numpy.org/ |
| NLP AUG | 1.1.11 | https://nlpaug.readthedocs.io/en/latest/index.html |
| BeautifulSoup4 | 4.12.3 | https://www.crummy.com/software/BeautifulSoup/bs4/doc/# |
## Citation
**BibTeX:**
For the original paper:
```
@inproceedings{brightcookies-semeval2025-task9,
title="BrightCookies at {S}em{E}val-2025 Task 9: Exploring Data Augmentation for Food Hazard Classification",
author="Papadopoulou, Foteini and Mutlu, Osman and Özen, Neris and van der Velden, Bas H. M. and Hendrickx, Iris and Hürriyetoğlu, Ali",
booktitle = "Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)",
month = jul,
year = "2025",
address = "Vienna, Austria",
publisher = "Association for Computational Linguistics",
}
```
For the SemEval2025 Task9:
```
@inproceedings{semeval2025-task9,
title = "{S}em{E}val-2025 Task 9: The Food Hazard Detection Challenge",
author = "Randl, Korbinian and Pavlopoulos, John and Henriksson, Aron and Lindgren, Tony and Bakagianni, Juli",
booktitle = "Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)",
month = jul,
year = "2025",
address = "Vienna, Austria",
publisher = "Association for Computational Linguistics",
}
```
## Model Card Authors and Contact
Authors: Foteini Papadopoulou, Osman Mutlu, Neris Özen,
Bas H.M. van der Velden, Iris Hendrickx, Ali Hürriyetoğlu
Contact: [email protected] |
DataScienceWFSR/modernbert-food-hazard-base | DataScienceWFSR | 2025-04-30T17:25:42Z | 0 | 0 | null | [
"safetensors",
"modernbert",
"text-classification",
"en",
"arxiv:2504.20703",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"region:us"
] | text-classification | 2025-04-30T11:31:03Z | ---
language:
- en
metrics:
- f1
base_model:
- answerdotai/ModernBERT-base
pipeline_tag: text-classification
---
# ModernBERT Food Hazard Classification Model - Baseline
## Model Details
### Model Description
This model is finetuned on multi-class food hazard text classification using ModernBERT.
- **Developed by:** [DataScienceWFSR](https://huggingface.co/DataScienceWFSR)
- **Model type:** Text Classification
- **Language(s) (NLP):** English
- **Finetuned from model:** [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base)
### Model Sources
- **Repository:** [https://github.com/WFSRDataScience/SemEval2025Task9](https://github.com/WFSRDataScience/SemEval2025Task9)
- **Paper:** [https://arxiv.org/abs/2504.20703](https://arxiv.org/abs/2504.20703)
## How to Get Started With the Model
Use the code below to get started with the model in PyTorch.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from huggingface_hub import hf_hub_download
import pandas as pd
model, category, augmentation = 'modernbert', 'hazard', 'base'
repo_id = f"DataScienceWFSR/{model}-food-{category}-{augmentation}"
lb_path = hf_hub_download(repo_id=repo_id, filename=f"labelencoder_{category}.pkl")
lb = pd.read_pickle(lb_path)
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()
sample = ('Case Number: 039-94 Date Opened: 10/20/1994 Date Closed: 03/06/1995 Recall Class: 1'
' Press Release (Y/N): N Domestic Est. Number: 07188 M Name: PREPARED FOODS Imported '
'Product (Y/N): N Foreign Estab. Number: N/A City: SANTA TERESA State: NM Country: USA'
' Product: HAM, SLICED Problem: BACTERIA Description: LISTERIA '
'Total Pounds Recalled: 3,920 Pounds Recovered: 3,920')
inputs = tokenizer(sample, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
predictions = outputs.logits.argmax(dim=-1)
predicted_label = lb.inverse_transform(predictions.numpy())[0]
print(f"The predicted label is: {predicted_label}")
```
## Training Details
### Training Data
Training and Validation data provided by SemEval-2025 Task 9 organizers : `Food Recall Incidents` dataset (only English) [link](https://github.com/food-hazard-detection-semeval-2025/food-hazard-detection-semeval-2025.github.io/tree/main/data)
### Training Procedure
#### Training Hyperparameters
- batch_size: `8`
- epochs: `10`
- lr_scheduler: `linear`
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data & Metrics
#### Testing Data
Test data: 997 samples ([link](https://github.com/food-hazard-detection-semeval-2025/food-hazard-detection-semeval-2025.github.io/blob/main/data/incidents_test.csv))
#### Metrics
F<sub>1</sub>-macro
### Results
F<sub>1</sub>-macro scores for each model on the official test set using the `text` field, per category, together with the subtask scores (ST1 and ST2), rounded to 3 decimals. This model's results are shown in bold.
| Model | hazard-category | product-category | hazard | product | ST1 | ST2 |
|----------------------|----------------:|-----------------:|-------:|--------:|------:|------:|
| BERT<sub>base</sub> | 0.747 | 0.757 | 0.581 | 0.170 | 0.753 | 0.382 |
| BERT<sub>CW</sub> | 0.760 | 0.761 | 0.671 | 0.280 | 0.762 | 0.491 |
| BERT<sub>SR</sub> | 0.770 | 0.754 | 0.666 | 0.275 | 0.764 | 0.478 |
| BERT<sub>RW</sub> | 0.752 | 0.757 | 0.651 | 0.275 | 0.756 | 0.467 |
| DistilBERT<sub>base</sub> | 0.761 | 0.757 | 0.593 | 0.154 | 0.760 | 0.378 |
| DistilBERT<sub>CW</sub> | 0.766 | 0.753 | 0.635 | 0.246 | 0.763 | 0.449 |
| DistilBERT<sub>SR</sub> | 0.756 | 0.759 | 0.644 | 0.240 | 0.763 | 0.448 |
| DistilBERT<sub>RW</sub> | 0.749 | 0.747 | 0.647 | 0.261 | 0.753 | 0.462 |
| RoBERTa<sub>base</sub> | 0.760 | 0.753 | 0.579 | 0.123 | 0.755 | 0.356 |
| RoBERTa<sub>CW</sub> | 0.773 | 0.739 | 0.630 | 0.000 | 0.760 | 0.315 |
| RoBERTa<sub>SR</sub> | 0.777 | 0.755 | 0.637 | 0.000 | 0.767 | 0.319 |
| RoBERTa<sub>RW</sub> | 0.757 | 0.611 | 0.615 | 0.000 | 0.686 | 0.308 |
| **ModernBERT<sub>base</sub>** | **0.781** | **0.745** | **0.667** | **0.275** | **0.769** | **0.485** |
| ModernBERT<sub>CW</sub> | 0.761 | 0.712 | 0.609 | 0.252 | 0.741 | 0.441 |
| ModernBERT<sub>SR</sub> | 0.790 | 0.728 | 0.591 | 0.253 | 0.761 | 0.434 |
| ModernBERT<sub>RW</sub> | 0.761 | 0.751 | 0.629 | 0.237 | 0.759 | 0.440 |
## Technical Specifications
### Compute Infrastructure
#### Hardware
NVIDIA A100 80GB and NVIDIA GeForce RTX 3070 Ti
#### Software
| Library | Version | URL |
|-------------------|--------:|---------------------------------------------------------------------|
| Transformers | 4.49.0 | https://huggingface.co/docs/transformers/index |
| PyTorch | 2.6.0 | https://pytorch.org/ |
| SpaCy | 3.8.4 | https://spacy.io/ |
| Scikit-learn | 1.6.0 | https://scikit-learn.org/stable/ |
| Pandas | 2.2.3 | https://pandas.pydata.org/ |
| Optuna | 4.2.1 | https://optuna.org/ |
| NumPy | 2.0.2 | https://numpy.org/ |
| NLP AUG | 1.1.11 | https://nlpaug.readthedocs.io/en/latest/index.html |
| BeautifulSoup4 | 4.12.3 | https://www.crummy.com/software/BeautifulSoup/bs4/doc/# |
## Citation
**BibTeX:**
For the original paper:
```
@inproceedings{brightcookies-semeval2025-task9,
title="BrightCookies at {S}em{E}val-2025 Task 9: Exploring Data Augmentation for Food Hazard Classification",
author="Papadopoulou, Foteini and Mutlu, Osman and Özen, Neris and van der Velden, Bas H. M. and Hendrickx, Iris and Hürriyetoğlu, Ali",
booktitle = "Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)",
month = jul,
year = "2025",
address = "Vienna, Austria",
publisher = "Association for Computational Linguistics",
}
```
For the SemEval2025 Task9:
```
@inproceedings{semeval2025-task9,
title = "{S}em{E}val-2025 Task 9: The Food Hazard Detection Challenge",
author = "Randl, Korbinian and Pavlopoulos, John and Henriksson, Aron and Lindgren, Tony and Bakagianni, Juli",
booktitle = "Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)",
month = jul,
year = "2025",
address = "Vienna, Austria",
publisher = "Association for Computational Linguistics",
}
```
## Model Card Authors and Contact
Authors: Foteini Papadopoulou, Osman Mutlu, Neris Özen,
Bas H.M. van der Velden, Iris Hendrickx, Ali Hürriyetoğlu
Contact: [email protected] |
MrPNess/juliablondynka | MrPNess | 2025-04-30T17:25:37Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-30T16:48:36Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: juliablondynka
---
# Juliablondynka
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `juliablondynka` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "juliablondynka",
    "lora_weights": "https://huggingface.co/MrPNess/juliablondynka/resolve/main/lora.safetensors"
}
output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('MrPNess/juliablondynka', weight_name='lora.safetensors')
image = pipeline('juliablondynka').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 3000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/MrPNess/juliablondynka/discussions) to add images that show off what you’ve made with this LoRA.
|
boxallcharlie/whisper-tiny-AAC-acoustic-music-finetune | boxallcharlie | 2025-04-30T17:25:13Z | 0 | 0 | null | [
"safetensors",
"whisper",
"license:apache-2.0",
"region:us"
] | null | 2025-04-30T13:24:48Z | ---
license: apache-2.0
---
Finetuned: https://huggingface.co/MU-NLPC/whisper-tiny-audio-captioning
Using my dataset: https://huggingface.co/datasets/boxallcharlie/acoustic-music-scenes
Enabling audio captioning for acoustic music. |
Yuhan123/ppo-1-lr-1e-6-2025-04-15-19-03-10 | Yuhan123 | 2025-04-30T17:23:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T17:21:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vertings6/d1e40d17-20a3-436c-b008-dedb8c8830c7 | vertings6 | 2025-04-30T17:23:08Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mixtral",
"axolotl",
"generated_from_trainer",
"base_model:TitanML/tiny-mixtral",
"base_model:adapter:TitanML/tiny-mixtral",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-30T17:22:10Z | ---
library_name: peft
base_model: TitanML/tiny-mixtral
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d1e40d17-20a3-436c-b008-dedb8c8830c7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: true
adapter: lora
base_model: TitanML/tiny-mixtral
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- acc0406433a922ad_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/acc0406433a922ad_train_data.json
type:
field_instruction: text
field_output: title
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 144
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: vertings6/d1e40d17-20a3-436c-b008-dedb8c8830c7
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 3.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 4
mixed_precision: bf16
mlflow_experiment_name: /tmp/acc0406433a922ad_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6471fd24-5583-4142-95a9-020afd362420
wandb_project: s56-32
wandb_run: your_name
wandb_runid: 6471fd24-5583-4142-95a9-020afd362420
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# d1e40d17-20a3-436c-b008-dedb8c8830c7
This model is a fine-tuned version of [TitanML/tiny-mixtral](https://huggingface.co/TitanML/tiny-mixtral) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.5156
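Because this repository stores a LoRA adapter (see the PEFT framework versions below) rather than merged weights, a minimal loading sketch might look like this (illustrative only; per the config above the adapter was trained to produce a title from an article text given as the instruction):
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "vertings6/d1e40d17-20a3-436c-b008-dedb8c8830c7"

# Loads TitanML/tiny-mixtral as the base model and applies the adapter on top.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id)
tokenizer = AutoTokenizer.from_pretrained("TitanML/tiny-mixtral")

inputs = tokenizer("Some article text to generate a title for.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```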
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.5039 | 0.0515 | 200 | 10.5156 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
polodealvarado/distilbert-review_classification | polodealvarado | 2025-04-30T17:23:04Z | 3 | 0 | null | [
"safetensors",
"distilbert",
"text-classification",
"sentiment-analysis",
"reviews",
"spanish",
"es",
"dataset:amazon_reviews_multi",
"license:mit",
"model-index",
"region:us"
] | text-classification | 2025-04-26T08:22:46Z | ---
language:
- es
license: mit
tags:
- text-classification
- sentiment-analysis
- reviews
- distilbert
- spanish
datasets:
- amazon_reviews_multi
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilbert-review_classification
results:
- task:
type: text-classification
name: Review classification (5 classes)
dataset:
name: amazon_reviews_multi (Spanish)
type: amazon_reviews_multi
metrics:
- type: accuracy
value: 0.5808
- type: f1
value: 0.58158
pipeline_tag: text-classification
widget:
- text: "Este producto es increíble, funciona perfectamente y el precio es excelente."
- text: "La calidad del producto deja mucho que desear y llegó con un retraso considerable."
---
# distilbert-review_classification
This model is a DistilBERT variant trained to classify Amazon reviews in Spanish. It is based on `distilbert-base-multilingual` and has been fine-tuned to predict star ratings (1-5) from the review text.
## Model
**Base architecture:** DistilBERT (distilbert-base-multilingual)
**Task:** Text classification (5 classes)
**Language:** Spanish
**Use case:** Sentiment analysis and review classification
## Performance
The model was evaluated on a balanced dataset with 1,000 samples per class (ratings from 1 to 5 stars):
| Metric | Value |
|---------|-------|
| Accuracy | 0.5808 |
| F1 Score (macro average) | 0.58158 |
| Precision (macro average) | 0.58303 |
| Recall (macro average) | 0.5808 |
### Per-class performance
| Class | Precision | Recall | F1 Score | Support |
|-------|-----------|--------|----------|---------|
| 1 ⭐ | 0.72069 | 0.707 | 0.71378 | 1000 |
| 2 ⭐ | 0.50409 | 0.554 | 0.52787 | 1000 |
| 3 ⭐ | 0.48916 | 0.474 | 0.48146 | 1000 |
| 4 ⭐ | 0.51613 | 0.512 | 0.51406 | 1000 |
| 5 ⭐ | 0.68509 | 0.657 | 0.67075 | 1000 |
## Training details
* **Epochs:** 1
* **Training steps:** 50,000
* **Training time:** ~8.2 hours (29,486 seconds)
* **Final loss:** 0.9721
## Usage
```python
from transformers import pipeline
# Create the classification pipeline
clasificador = pipeline(
    "text-classification",
    model="polodealvarado/distilbert-review_classification",
    tokenizer="polodealvarado/distilbert-review_classification",
    top_k=1,  # only return the most probable class
)
# Input text (a Spanish review)
texto = "Este producto superó mis expectativas, lo recomiendo totalmente."
# Run the prediction
output = clasificador(texto)
# Extract the predicted class (e.g., 'LABEL_0', 'LABEL_1', ...)
etiqueta = output[0][0]["label"]
indice = int(etiqueta.replace("LABEL_", "")) # 'LABEL_0' → 0
estrellas_predichas = indice + 1
print(f"Predicción: {estrellas_predichas} estrellas")
```
## Limitations
- The model was trained on Amazon review data, so performance may degrade in other domains.
- Performance is highest for clearly positive (5-star) or clearly negative (1-star) reviews, while intermediate ratings (2-4 stars) show more modest results.
|
wildgeese25/bert-fake-news-detector-LLM-stacked | wildgeese25 | 2025-04-30T17:22:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-30T16:48:59Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bert-fake-news-detector-LLM-stacked
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-fake-news-detector-LLM-stacked
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2012
- Accuracy: 0.9471
- Precision: 0.9457
- Recall: 0.9523
- F1: 0.9490
- Confusion Matrix: [[18304, 1138], [992, 19826]]
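As a minimal inference sketch (not part of the original card; the mapping of `LABEL_0`/`LABEL_1` to real vs. fake news is an assumption to verify against the repository's `config.json`):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="wildgeese25/bert-fake-news-detector-LLM-stacked",
)

print(clf("Breaking: scientists confirm the moon is made of cheese."))
# e.g. [{'label': 'LABEL_1', 'score': 0.97}]; check id2label in config.json for the meaning
```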
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Confusion Matrix |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:--------------------------:|
| 0.1755 | 1.0 | 1321 | 0.2020 | 0.9442 | 0.9411 | 0.9512 | 0.9462 | [[4266, 288], [236, 4604]] |
| 0.19 | 2.0 | 2642 | 0.2028 | 0.9466 | 0.9430 | 0.9539 | 0.9484 | [[4275, 279], [223, 4617]] |
| 0.1548 | 3.0 | 3963 | 0.2152 | 0.9465 | 0.9491 | 0.9469 | 0.9480 | [[4308, 246], [257, 4583]] |
| 0.1679 | 4.0 | 5284 | 0.2194 | 0.9452 | 0.9467 | 0.9469 | 0.9468 | [[4296, 258], [257, 4583]] |
| 0.1216 | 5.0 | 6605 | 0.2489 | 0.9438 | 0.9466 | 0.9442 | 0.9454 | [[4296, 258], [270, 4570]] |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.5.1
- Datasets 3.5.0
- Tokenizers 0.21.1
|
rbelanec/train_wic_1745950289 | rbelanec | 2025-04-30T17:22:50Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"prompt-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-04-30T14:37:45Z | ---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- prompt-tuning
- generated_from_trainer
model-index:
- name: train_wic_1745950289
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_wic_1745950289
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the wic dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3401
- Num Input Tokens Seen: 12716696
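Since this repository holds a PEFT prompt-tuning adapter for Meta-Llama-3-8B-Instruct, a minimal loading sketch could look like the following (illustrative only; the base model is gated, and the exact WiC prompt template used during training is not documented in this card):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_wic_1745950289"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Illustrative WiC-style query; the actual training prompt format may differ.
prompt = 'Does "bank" mean the same thing in "river bank" and "bank account"? Answer yes or no.'
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
outputs = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```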
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.3
- train_batch_size: 2
- eval_batch_size: 2
- seed: 123
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- training_steps: 40000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-------:|:-----:|:---------------:|:-----------------:|
| 0.5065 | 0.1637 | 200 | 0.5396 | 63344 |
| 0.3591 | 0.3275 | 400 | 0.3541 | 126720 |
| 0.4417 | 0.4912 | 600 | 0.3968 | 190304 |
| 0.4891 | 0.6549 | 800 | 0.3590 | 254384 |
| 0.3967 | 0.8187 | 1000 | 0.3818 | 318128 |
| 0.3858 | 0.9824 | 1200 | 0.3527 | 381920 |
| 0.3513 | 1.1457 | 1400 | 0.3502 | 445096 |
| 0.3405 | 1.3095 | 1600 | 0.3826 | 508744 |
| 0.4021 | 1.4732 | 1800 | 0.3483 | 572408 |
| 0.3557 | 1.6369 | 2000 | 0.3458 | 635736 |
| 0.3648 | 1.8007 | 2200 | 0.3572 | 699464 |
| 0.3087 | 1.9644 | 2400 | 0.4780 | 763192 |
| 0.4053 | 2.1277 | 2600 | 0.3547 | 826784 |
| 0.4281 | 2.2914 | 2800 | 0.3490 | 890336 |
| 0.3645 | 2.4552 | 3000 | 0.3593 | 953840 |
| 0.3349 | 2.6189 | 3200 | 0.3629 | 1017600 |
| 0.3706 | 2.7826 | 3400 | 0.3511 | 1081104 |
| 0.3528 | 2.9464 | 3600 | 0.3451 | 1144576 |
| 0.3656 | 3.1097 | 3800 | 0.3496 | 1208440 |
| 0.3473 | 3.2734 | 4000 | 0.3893 | 1272216 |
| 0.3305 | 3.4372 | 4200 | 0.3602 | 1335496 |
| 0.3573 | 3.6009 | 4400 | 0.3460 | 1398984 |
| 0.3896 | 3.7646 | 4600 | 0.3575 | 1462856 |
| 0.3397 | 3.9284 | 4800 | 0.3458 | 1526280 |
| 0.3514 | 4.0917 | 5000 | 0.3485 | 1589584 |
| 0.6668 | 4.2554 | 5200 | 0.3508 | 1653024 |
| 0.3849 | 4.4192 | 5400 | 0.3482 | 1716432 |
| 0.379 | 4.5829 | 5600 | 0.3448 | 1779984 |
| 0.3405 | 4.7466 | 5800 | 0.3458 | 1843936 |
| 0.4002 | 4.9104 | 6000 | 0.3867 | 1907808 |
| 0.3535 | 5.0737 | 6200 | 0.3517 | 1971048 |
| 0.3731 | 5.2374 | 6400 | 0.3444 | 2034808 |
| 0.3293 | 5.4011 | 6600 | 0.3439 | 2098088 |
| 0.3836 | 5.5649 | 6800 | 0.4214 | 2161640 |
| 0.3358 | 5.7286 | 7000 | 0.3921 | 2225432 |
| 0.3696 | 5.8923 | 7200 | 0.3488 | 2289032 |
| 0.3513 | 6.0557 | 7400 | 0.3530 | 2352656 |
| 0.3305 | 6.2194 | 7600 | 0.3605 | 2416160 |
| 0.3563 | 6.3831 | 7800 | 0.3427 | 2479728 |
| 0.3611 | 6.5469 | 8000 | 0.3434 | 2543168 |
| 0.347 | 6.7106 | 8200 | 0.3525 | 2606560 |
| 0.3083 | 6.8743 | 8400 | 0.3547 | 2670208 |
| 0.3976 | 7.0377 | 8600 | 0.3833 | 2733584 |
| 0.3761 | 7.2014 | 8800 | 0.3490 | 2797008 |
| 0.3151 | 7.3651 | 9000 | 0.3430 | 2860576 |
| 0.365 | 7.5289 | 9200 | 0.3438 | 2924256 |
| 0.3556 | 7.6926 | 9400 | 0.3516 | 2988272 |
| 0.3605 | 7.8563 | 9600 | 0.3564 | 3051776 |
| 0.3351 | 8.0196 | 9800 | 0.3440 | 3114992 |
| 0.3529 | 8.1834 | 10000 | 0.3442 | 3179200 |
| 0.3084 | 8.3471 | 10200 | 0.3620 | 3242496 |
| 0.3466 | 8.5108 | 10400 | 0.3426 | 3306112 |
| 0.3848 | 8.6746 | 10600 | 0.3642 | 3369760 |
| 0.3336 | 8.8383 | 10800 | 0.3417 | 3433360 |
| 0.3275 | 9.0016 | 11000 | 0.3656 | 3496680 |
| 0.3595 | 9.1654 | 11200 | 0.3539 | 3560648 |
| 0.481 | 9.3291 | 11400 | 0.3790 | 3624200 |
| 0.358 | 9.4928 | 11600 | 0.3583 | 3687560 |
| 0.3582 | 9.6566 | 11800 | 0.3685 | 3751288 |
| 0.3476 | 9.8203 | 12000 | 0.3542 | 3814952 |
| 0.3758 | 9.9840 | 12200 | 0.3419 | 3878120 |
| 0.3407 | 10.1474 | 12400 | 0.3421 | 3941616 |
| 0.359 | 10.3111 | 12600 | 0.3778 | 4005216 |
| 0.4143 | 10.4748 | 12800 | 0.3517 | 4068912 |
| 0.3404 | 10.6386 | 13000 | 0.3437 | 4132608 |
| 0.3326 | 10.8023 | 13200 | 0.3473 | 4196096 |
| 0.3752 | 10.9660 | 13400 | 0.3415 | 4259680 |
| 0.3604 | 11.1293 | 13600 | 0.3417 | 4323128 |
| 0.3652 | 11.2931 | 13800 | 0.3412 | 4386856 |
| 0.3631 | 11.4568 | 14000 | 0.4083 | 4450296 |
| 0.3529 | 11.6205 | 14200 | 0.3433 | 4513544 |
| 0.3592 | 11.7843 | 14400 | 0.3439 | 4576984 |
| 0.3624 | 11.9480 | 14600 | 0.3481 | 4640904 |
| 0.3325 | 12.1113 | 14800 | 0.3525 | 4704360 |
| 0.3417 | 12.2751 | 15000 | 0.3641 | 4768152 |
| 0.3616 | 12.4388 | 15200 | 0.3509 | 4832152 |
| 0.3618 | 12.6025 | 15400 | 0.3435 | 4895192 |
| 0.2959 | 12.7663 | 15600 | 0.3713 | 4959112 |
| 0.3387 | 12.9300 | 15800 | 0.3452 | 5022408 |
| 0.3556 | 13.0933 | 16000 | 0.3429 | 5086016 |
| 0.3536 | 13.2571 | 16200 | 0.3471 | 5149920 |
| 0.3314 | 13.4208 | 16400 | 0.3433 | 5213296 |
| 0.3272 | 13.5845 | 16600 | 0.3430 | 5276672 |
| 0.3096 | 13.7483 | 16800 | 0.3461 | 5340624 |
| 0.3368 | 13.9120 | 17000 | 0.3429 | 5403792 |
| 0.3331 | 14.0753 | 17200 | 0.3419 | 5466936 |
| 0.3603 | 14.2391 | 17400 | 0.3429 | 5530392 |
| 0.343 | 14.4028 | 17600 | 0.3444 | 5593576 |
| 0.3551 | 14.5665 | 17800 | 0.3428 | 5657288 |
| 0.3524 | 14.7302 | 18000 | 0.3417 | 5721496 |
| 0.3649 | 14.8940 | 18200 | 0.3420 | 5785096 |
| 0.3429 | 15.0573 | 18400 | 0.3449 | 5848736 |
| 0.3931 | 15.2210 | 18600 | 0.3472 | 5912176 |
| 0.3289 | 15.3848 | 18800 | 0.3452 | 5976400 |
| 0.3598 | 15.5485 | 19000 | 0.3416 | 6040272 |
| 0.3597 | 15.7122 | 19200 | 0.3496 | 6103424 |
| 0.3246 | 15.8760 | 19400 | 0.3464 | 6166912 |
| 0.3315 | 16.0393 | 19600 | 0.3467 | 6230320 |
| 0.3437 | 16.2030 | 19800 | 0.3515 | 6294224 |
| 0.3234 | 16.3668 | 20000 | 0.3443 | 6357984 |
| 0.3441 | 16.5305 | 20200 | 0.3408 | 6421344 |
| 0.3771 | 16.6942 | 20400 | 0.3424 | 6485152 |
| 0.3228 | 16.8580 | 20600 | 0.3413 | 6548768 |
| 0.3452 | 17.0213 | 20800 | 0.3402 | 6611792 |
| 0.3946 | 17.1850 | 21000 | 0.3696 | 6675216 |
| 0.3497 | 17.3488 | 21200 | 0.3429 | 6739088 |
| 0.3684 | 17.5125 | 21400 | 0.3428 | 6802352 |
| 0.3571 | 17.6762 | 21600 | 0.3407 | 6866160 |
| 0.3559 | 17.8400 | 21800 | 0.3422 | 6929936 |
| 0.3334 | 18.0033 | 22000 | 0.3469 | 6993168 |
| 0.326 | 18.1670 | 22200 | 0.3428 | 7057008 |
| 0.3536 | 18.3307 | 22400 | 0.3474 | 7120624 |
| 0.3444 | 18.4945 | 22600 | 0.3433 | 7183872 |
| 0.3523 | 18.6582 | 22800 | 0.3550 | 7247952 |
| 0.3489 | 18.8219 | 23000 | 0.3424 | 7311488 |
| 0.3721 | 18.9857 | 23200 | 0.3442 | 7374848 |
| 0.3305 | 19.1490 | 23400 | 0.3444 | 7438160 |
| 0.3571 | 19.3127 | 23600 | 0.3422 | 7501872 |
| 0.3298 | 19.4765 | 23800 | 0.3449 | 7565520 |
| 0.3438 | 19.6402 | 24000 | 0.3472 | 7629488 |
| 0.3458 | 19.8039 | 24200 | 0.3406 | 7692992 |
| 0.3318 | 19.9677 | 24400 | 0.3416 | 7756512 |
| 0.3622 | 20.1310 | 24600 | 0.3504 | 7819816 |
| 0.3295 | 20.2947 | 24800 | 0.3480 | 7883800 |
| 0.3473 | 20.4585 | 25000 | 0.3407 | 7947944 |
| 0.3418 | 20.6222 | 25200 | 0.3414 | 8011336 |
| 0.3751 | 20.7859 | 25400 | 0.3460 | 8075000 |
| 0.3266 | 20.9497 | 25600 | 0.3427 | 8138568 |
| 0.3622 | 21.1130 | 25800 | 0.3528 | 8201872 |
| 0.3774 | 21.2767 | 26000 | 0.3425 | 8265168 |
| 0.3339 | 21.4404 | 26200 | 0.3426 | 8328704 |
| 0.3408 | 21.6042 | 26400 | 0.3419 | 8392144 |
| 0.3361 | 21.7679 | 26600 | 0.3685 | 8456096 |
| 0.3613 | 21.9316 | 26800 | 0.3409 | 8519872 |
| 0.3437 | 22.0950 | 27000 | 0.3427 | 8583464 |
| 0.343 | 22.2587 | 27200 | 0.3421 | 8646840 |
| 0.3847 | 22.4224 | 27400 | 0.3404 | 8710600 |
| 0.3366 | 22.5862 | 27600 | 0.3436 | 8774344 |
| 0.3391 | 22.7499 | 27800 | 0.3416 | 8838024 |
| 0.3389 | 22.9136 | 28000 | 0.3412 | 8901832 |
| 0.3344 | 23.0770 | 28200 | 0.3423 | 8965184 |
| 0.3528 | 23.2407 | 28400 | 0.3417 | 9028576 |
| 0.3488 | 23.4044 | 28600 | 0.3414 | 9092256 |
| 0.3186 | 23.5682 | 28800 | 0.3416 | 9155872 |
| 0.323 | 23.7319 | 29000 | 0.3437 | 9219312 |
| 0.3526 | 23.8956 | 29200 | 0.3435 | 9283264 |
| 0.3631 | 24.0589 | 29400 | 0.3422 | 9346992 |
| 0.341 | 24.2227 | 29600 | 0.3443 | 9410880 |
| 0.3369 | 24.3864 | 29800 | 0.3431 | 9474704 |
| 0.3443 | 24.5501 | 30000 | 0.3413 | 9538160 |
| 0.3313 | 24.7139 | 30200 | 0.3428 | 9601792 |
| 0.3288 | 24.8776 | 30400 | 0.3433 | 9664976 |
| 0.3273 | 25.0409 | 30600 | 0.3405 | 9728232 |
| 0.3402 | 25.2047 | 30800 | 0.3426 | 9791848 |
| 0.3501 | 25.3684 | 31000 | 0.3421 | 9855400 |
| 0.3665 | 25.5321 | 31200 | 0.3435 | 9918984 |
| 0.3395 | 25.6959 | 31400 | 0.3409 | 9982872 |
| 0.3486 | 25.8596 | 31600 | 0.3427 | 10046056 |
| 0.3176 | 26.0229 | 31800 | 0.3437 | 10109568 |
| 0.3398 | 26.1867 | 32000 | 0.3404 | 10173072 |
| 0.3515 | 26.3504 | 32200 | 0.3432 | 10236512 |
| 0.3292 | 26.5141 | 32400 | 0.3431 | 10299920 |
| 0.3336 | 26.6779 | 32600 | 0.3428 | 10363808 |
| 0.3551 | 26.8416 | 32800 | 0.3417 | 10427744 |
| 0.3327 | 27.0049 | 33000 | 0.3425 | 10491384 |
| 0.347 | 27.1686 | 33200 | 0.3419 | 10555192 |
| 0.3613 | 27.3324 | 33400 | 0.3444 | 10619080 |
| 0.3946 | 27.4961 | 33600 | 0.3408 | 10682424 |
| 0.325 | 27.6598 | 33800 | 0.3415 | 10746024 |
| 0.3064 | 27.8236 | 34000 | 0.3413 | 10809736 |
| 0.3768 | 27.9873 | 34200 | 0.3420 | 10873448 |
| 0.3476 | 28.1506 | 34400 | 0.3434 | 10936704 |
| 0.3491 | 28.3144 | 34600 | 0.3401 | 11000112 |
| 0.3311 | 28.4781 | 34800 | 0.3417 | 11063936 |
| 0.3356 | 28.6418 | 35000 | 0.3414 | 11128160 |
| 0.3316 | 28.8056 | 35200 | 0.3424 | 11191600 |
| 0.3294 | 28.9693 | 35400 | 0.3426 | 11255184 |
| 0.3253 | 29.1326 | 35600 | 0.3421 | 11318640 |
| 0.3424 | 29.2964 | 35800 | 0.3420 | 11382352 |
| 0.3419 | 29.4601 | 36000 | 0.3410 | 11446048 |
| 0.3129 | 29.6238 | 36200 | 0.3411 | 11509328 |
| 0.3309 | 29.7876 | 36400 | 0.3408 | 11573312 |
| 0.3477 | 29.9513 | 36600 | 0.3426 | 11636752 |
| 0.3555 | 30.1146 | 36800 | 0.3434 | 11700056 |
| 0.3449 | 30.2783 | 37000 | 0.3430 | 11763352 |
| 0.3533 | 30.4421 | 37200 | 0.3415 | 11826952 |
| 0.3442 | 30.6058 | 37400 | 0.3421 | 11890888 |
| 0.3441 | 30.7695 | 37600 | 0.3419 | 11954296 |
| 0.3564 | 30.9333 | 37800 | 0.3424 | 12017784 |
| 0.3582 | 31.0966 | 38000 | 0.3422 | 12081304 |
| 0.3418 | 31.2603 | 38200 | 0.3430 | 12145240 |
| 0.3733 | 31.4241 | 38400 | 0.3442 | 12208888 |
| 0.342 | 31.5878 | 38600 | 0.3433 | 12272344 |
| 0.3461 | 31.7515 | 38800 | 0.3431 | 12335960 |
| 0.3463 | 31.9153 | 39000 | 0.3428 | 12399064 |
| 0.3469 | 32.0786 | 39200 | 0.3425 | 12462200 |
| 0.3511 | 32.2423 | 39400 | 0.3425 | 12526024 |
| 0.3319 | 32.4061 | 39600 | 0.3424 | 12589496 |
| 0.3255 | 32.5698 | 39800 | 0.3426 | 12653080 |
| 0.3419 | 32.7335 | 40000 | 0.3423 | 12716696 |
### Framework versions
- PEFT 0.15.2.dev0
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
yogevh/bert-finetuned-ner | yogevh | 2025-04-30T17:20:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-04-30T16:47:49Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
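As a minimal usage sketch (not part of the original card), the checkpoint can be loaded with the standard `token-classification` pipeline; the aggregation strategy below is an assumption rather than something specified by the author.

```python
from transformers import pipeline

# Load the fine-tuned NER checkpoint from the Hub (hypothetical usage sketch).
ner = pipeline(
    "token-classification",
    model="yogevh/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)

print(ner("Hugging Face is based in New York City."))
# Returns a list of dicts with entity_group, score, word, start and end offsets.
```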
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
Culturedniichan/mergekit-ties-bciqnej | Culturedniichan | 2025-04-30T17:19:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:ReadyArt/Forgotten-Safeword-24B-v4.0",
"base_model:merge:ReadyArt/Forgotten-Safeword-24B-v4.0",
"base_model:TroyDoesAI/BlackSheep-24B",
"base_model:merge:TroyDoesAI/BlackSheep-24B",
"base_model:unsloth/Mistral-Small-24B-Base-2501",
"base_model:merge:unsloth/Mistral-Small-24B-Base-2501",
"base_model:unsloth/Mistral-Small-24B-Instruct-2501",
"base_model:merge:unsloth/Mistral-Small-24B-Instruct-2501",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T17:04:57Z | ---
base_model:
- unsloth/Mistral-Small-24B-Base-2501
- TroyDoesAI/BlackSheep-24B
- unsloth/Mistral-Small-24B-Instruct-2501
- ReadyArt/Forgotten-Safeword-24B-v4.0
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [unsloth/Mistral-Small-24B-Instruct-2501](https://huggingface.co/unsloth/Mistral-Small-24B-Instruct-2501) as the base.
### Models Merged
The following models were included in the merge:
* [unsloth/Mistral-Small-24B-Base-2501](https://huggingface.co/unsloth/Mistral-Small-24B-Base-2501)
* [TroyDoesAI/BlackSheep-24B](https://huggingface.co/TroyDoesAI/BlackSheep-24B)
* [ReadyArt/Forgotten-Safeword-24B-v4.0](https://huggingface.co/ReadyArt/Forgotten-Safeword-24B-v4.0)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: unsloth/Mistral-Small-24B-Instruct-2501
- model: TroyDoesAI/BlackSheep-24B
parameters:
density: 0.40
weight: 0.55
- model: ReadyArt/Forgotten-Safeword-24B-v4.0
parameters:
density: 0.40
weight: 0.40
- model: unsloth/Mistral-Small-24B-Base-2501
parameters:
density: 0.40
weight: 0.15
merge_method: ties
base_model: unsloth/Mistral-Small-24B-Instruct-2501
parameters:
normalize: true
dtype: float16
tokenizer:
source: union
```
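A hedged usage sketch (not part of the original card): a configuration like the one above is normally applied with mergekit's `mergekit-yaml` command-line entry point. The file name and output directory below are placeholders.

```python
import subprocess

# Assumes mergekit is installed (e.g. `pip install mergekit`) and that the YAML
# above has been saved as config.yaml; ./merged-model is an arbitrary output path.
subprocess.run(["mergekit-yaml", "config.yaml", "./merged-model"], check=True)
```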
|
ClaMncDexter/gemma-3-1b-it-unsloth-bnb-4bit-float16 | ClaMncDexter | 2025-04-30T17:18:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T17:18:12Z | ---
base_model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** ClaMncDexter
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-1b-it-unsloth-bnb-4bit
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
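A minimal inference sketch (not part of the original card), assuming this repository hosts the merged float16 weights (as the name suggests) rather than only LoRA adapters, and a transformers version with Gemma 3 support:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "ClaMncDexter/gemma-3-1b-it-unsloth-bnb-4bit-float16"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

# Build a chat-formatted prompt and generate a short completion.
messages = [{"role": "user", "content": "In one sentence, what is this model for?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(device)
output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```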
|
straikerinc/Qwen2.5-VL-3B-Instruct-argus-384-updated | straikerinc | 2025-04-30T17:18:13Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-04-29T18:57:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ELHSI/llama-3.1-8b-ds-lbd-diagnostic-prediction-fine-tuned-model-v7 | ELHSI | 2025-04-30T17:18:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-30T17:12:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ClaMncDexter/gemma-3-1b-it-unsloth-bnb-4bit | ClaMncDexter | 2025-04-30T17:17:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3_text",
"trl",
"en",
"base_model:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-30T17:17:42Z | ---
base_model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ClaMncDexter
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-1b-it-unsloth-bnb-4bit
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
DataScienceWFSR/distilbert-food-product-rw | DataScienceWFSR | 2025-04-30T17:17:38Z | 3 | 0 | null | [
"safetensors",
"distilbert",
"text-classification",
"en",
"arxiv:2504.20703",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"region:us"
] | text-classification | 2025-04-28T15:03:48Z | ---
language:
- en
metrics:
- f1
base_model:
- distilbert/distilbert-base-uncased
pipeline_tag: text-classification
---
# DistilBert Food Product Classification Model - Random Word Swapping Augmentation
## Model Details
### Model Description
This model is distilbert-base-uncased fine-tuned for multi-class food product text classification, using random word swapping augmentation.
- **Developed by:** [DataScienceWFSR](https://huggingface.co/DataScienceWFSR)
- **Model type:** Text Classification
- **Language(s) (NLP):** English
- **Finetuned from model:** [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased)
### Model Sources
- **Repository:** [https://github.com/WFSRDataScience/SemEval2025Task9](https://github.com/WFSRDataScience/SemEval2025Task9)
- **Paper :** [https://arxiv.org/abs/2504.20703](https://arxiv.org/abs/2504.20703)
## How to Get Started With the Model
Use the code below to get started with the model in PyTorch.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from huggingface_hub import hf_hub_download
import pandas as pd
model, category, augmentation = 'distilbert', 'product', 'rw'
repo_id = f"DataScienceWFSR/{model}-food-{category}-{augmentation}"
lb_path = hf_hub_download(repo_id=repo_id, filename=f"labelencoder_{category}.pkl")
lb = pd.read_pickle(lb_path)
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()
sample = ('Case Number: 039-94 Date Opened: 10/20/1994 Date Closed: 03/06/1995 Recall Class: 1'
' Press Release (Y/N): N Domestic Est. Number: 07188 M Name: PREPARED FOODS Imported '
'Product (Y/N): N Foreign Estab. Number: N/A City: SANTA TERESA State: NM Country: USA'
' Product: HAM, SLICED Problem: BACTERIA Description: LISTERIA '
'Total Pounds Recalled: 3,920 Pounds Recovered: 3,920')
inputs = tokenizer(sample, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
predictions = outputs.logits.argmax(dim=-1)
predicted_label = lb.inverse_transform(predictions.numpy())[0]
print(f"The predicted label is: {predicted_label}")
```
## Training Details
### Training Data
Training and validation data were provided by the SemEval-2025 Task 9 organizers: the `Food Recall Incidents` dataset (English only) [link](https://github.com/food-hazard-detection-semeval-2025/food-hazard-detection-semeval-2025.github.io/tree/main/data)
### Training Procedure
#### Training Hyperparameters
- batch_size: `32`
- epochs: `5`
- lr_scheduler: `linear`
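The hyperparameters above could be plugged into a standard Hugging Face `Trainer` run, as in the hedged sketch below; the dataset here is a tiny dummy stand-in (the actual training script, label encoding, and augmented data are not part of this card), and the number of labels is a placeholder.

```python
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "distilbert-base-uncased"
NUM_LABELS = 2  # placeholder; the real task has one class per product label

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=NUM_LABELS)

# Tiny dummy dataset standing in for the (augmented) Food Recall Incidents data.
raw = Dataset.from_dict({
    "text": ["Recall of sliced ham due to Listeria", "Undeclared peanuts in cookies"],
    "label": [0, 1],
})
tokenized = raw.map(lambda ex: tokenizer(ex["text"], truncation=True), batched=True)

args = TrainingArguments(
    output_dir="distilbert-food-product-rw",
    per_device_train_batch_size=32,  # batch_size: 32 from the card
    num_train_epochs=5,              # epochs: 5 from the card
    lr_scheduler_type="linear",      # lr_scheduler: linear from the card
)

trainer = Trainer(model=model, args=args, train_dataset=tokenized, tokenizer=tokenizer)
trainer.train()
```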
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data & Metrics
#### Testing Data
Test data: 997 samples ([link](https://github.com/food-hazard-detection-semeval-2025/food-hazard-detection-semeval-2025.github.io/blob/main/data/incidents_test.csv))
#### Metrics
F<sub>1</sub>-macro
### Results
F<sub>1</sub>-macro scores on the official test set for each model (using the `text` field), reported per category and for subtasks ST1 and ST2, rounded to 3 decimals. This model's results are shown in bold.
| Model | hazard-category | product-category | hazard | product | ST1 | ST2 |
|----------------------|----------------:|-----------------:|-------:|--------:|------:|------:|
| BERT<sub>base</sub> | 0.747 | 0.757 | 0.581 | 0.170 | 0.753 | 0.382 |
| BERT<sub>CW</sub> | 0.760 | 0.761 | 0.671 | 0.280 | 0.762 | 0.491 |
| BERT<sub>SR</sub> | 0.770 | 0.754 | 0.666 | 0.275 | 0.764 | 0.478 |
| BERT<sub>RW</sub> | 0.752 | 0.757 | 0.651 | 0.275 | 0.756 | 0.467 |
| DistilBERT<sub>base</sub> | 0.761 | 0.757 | 0.593 | 0.154 | 0.760 | 0.378 |
| DistilBERT<sub>CW</sub> | 0.766 | 0.753 | 0.635 | 0.246 | 0.763 | 0.449 |
| DistilBERT<sub>SR</sub> | 0.756 | 0.759 | 0.644 | 0.240 | 0.763 | 0.448 |
| **DistilBERT<sub>RW</sub>** | **0.749** | **0.747** | **0.647** | **0.261** | **0.753** | **0.462** |
| RoBERTa<sub>base</sub> | 0.760 | 0.753 | 0.579 | 0.123 | 0.755 | 0.356 |
| RoBERTa<sub>CW</sub> | 0.773 | 0.739 | 0.630 | 0.000 | 0.760 | 0.315 |
| RoBERTa<sub>SR</sub> | 0.777 | 0.755 | 0.637 | 0.000 | 0.767 | 0.319 |
| RoBERTa<sub>RW</sub> | 0.757 | 0.611 | 0.615 | 0.000 | 0.686 | 0.308 |
| ModernBERT<sub>base</sub> | 0.781 | 0.745 | 0.667 | 0.275 | 0.769 | 0.485 |
| ModernBERT<sub>CW</sub> | 0.761 | 0.712 | 0.609 | 0.252 | 0.741 | 0.441 |
| ModernBERT<sub>SR</sub> | 0.790 | 0.728 | 0.591 | 0.253 | 0.761 | 0.434 |
| ModernBERT<sub>RW</sub> | 0.761 | 0.751 | 0.629 | 0.237 | 0.759 | 0.440 |
## Technical Specifications
### Compute Infrastructure
#### Hardware
NVIDIA A100 80GB and NVIDIA GeForce RTX 3070 Ti
#### Software
| Library | Version | URL |
|-------------------|--------:|---------------------------------------------------------------------|
| Transformers | 4.49.0 | https://huggingface.co/docs/transformers/index |
| PyTorch | 2.6.0 | https://pytorch.org/ |
| SpaCy | 3.8.4 | https://spacy.io/ |
| Scikit-learn | 1.6.0 | https://scikit-learn.org/stable/ |
| Pandas | 2.2.3 | https://pandas.pydata.org/ |
| Optuna | 4.2.1 | https://optuna.org/ |
| NumPy | 2.0.2 | https://numpy.org/ |
| NLP AUG | 1.1.11 | https://nlpaug.readthedocs.io/en/latest/index.html |
| BeautifulSoup4 | 4.12.3 | https://www.crummy.com/software/BeautifulSoup/bs4/doc/# |
## Citation
**BibTeX:**
For the original paper:
```
@inproceedings{brightcookies-semeval2025-task9,
title="BrightCookies at {S}em{E}val-2025 Task 9: Exploring Data Augmentation for Food Hazard Classification},
author="Papadopoulou, Foteini and Mutlu, Osman and Özen, Neris and van der Velden, Bas H. M. and Hendrickx, Iris and Hürriyetoğlu, Ali",
booktitle = "Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)",
month = jul,
year = "2025",
address = "Vienna, Austria",
publisher = "Association for Computational Linguistics",
}
```
For the SemEval2025 Task9:
```
@inproceedings{semeval2025-task9,
title = "{S}em{E}val-2025 Task 9: The Food Hazard Detection Challenge",
author = "Randl, Korbinian and Pavlopoulos, John and Henriksson, Aron and Lindgren, Tony and Bakagianni, Juli",
booktitle = "Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)",
month = jul,
year = "2025",
address = "Vienna, Austria",
publisher = "Association for Computational Linguistics",
}
```
## Model Card Authors and Contact
Authors: Foteini Papadopoulou, Osman Mutlu, Neris Özen,
Bas H.M. van der Velden, Iris Hendrickx, Ali Hürriyetoğlu
Contact: [email protected] |
DataScienceWFSR/distilbert-food-hazard-rw | DataScienceWFSR | 2025-04-30T17:15:54Z | 2 | 0 | null | [
"safetensors",
"distilbert",
"text-classification",
"en",
"arxiv:2504.20703",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"region:us"
] | text-classification | 2025-04-30T10:22:53Z | ---
language:
- en
metrics:
- f1
base_model:
- distilbert/distilbert-base-uncased
pipeline_tag: text-classification
---
# DistilBert Food Hazard Classification Model - Random Word Swapping Augmentation
## Model Details
### Model Description
This model is distilbert-base-uncased fine-tuned for multi-class food hazard text classification, using random word swapping augmentation.
- **Developed by:** [DataScienceWFSR](https://huggingface.co/DataScienceWFSR)
- **Model type:** Text Classification
- **Language(s) (NLP):** English
- **Finetuned from model:** [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased)
### Model Sources
- **Repository:** [https://github.com/WFSRDataScience/SemEval2025Task9](https://github.com/WFSRDataScience/SemEval2025Task9)
- **Paper :** [https://arxiv.org/abs/2504.20703](https://arxiv.org/abs/2504.20703)
## How to Get Started With the Model
Use the code below to get started with the model in PyTorch.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from huggingface_hub import hf_hub_download
import pandas as pd
model, category, augmentation = 'distilbert', 'hazard', 'rw'
repo_id = f"DataScienceWFSR/{model}-food-{category}-{augmentation}"
lb_path = hf_hub_download(repo_id=repo_id, filename=f"labelencoder_{category}.pkl")
lb = pd.read_pickle(lb_path)
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()
sample = ('Case Number: 039-94 Date Opened: 10/20/1994 Date Closed: 03/06/1995 Recall Class: 1'
' Press Release (Y/N): N Domestic Est. Number: 07188 M Name: PREPARED FOODS Imported '
'Product (Y/N): N Foreign Estab. Number: N/A City: SANTA TERESA State: NM Country: USA'
' Product: HAM, SLICED Problem: BACTERIA Description: LISTERIA '
'Total Pounds Recalled: 3,920 Pounds Recovered: 3,920')
inputs = tokenizer(sample, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
predictions = outputs.logits.argmax(dim=-1)
predicted_label = lb.inverse_transform(predictions.numpy())[0]
print(f"The predicted label is: {predicted_label}")
```
## Training Details
### Training Data
Training and validation data were provided by the SemEval-2025 Task 9 organizers: the `Food Recall Incidents` dataset (English only) [link](https://github.com/food-hazard-detection-semeval-2025/food-hazard-detection-semeval-2025.github.io/tree/main/data)
### Training Procedure
#### Training Hyperparameters
- batch_size: `32`
- epochs: `10`
- lr_scheduler: `cosine with Restarts`
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data & Metrics
#### Testing Data
Test data: 997 samples ([link](https://github.com/food-hazard-detection-semeval-2025/food-hazard-detection-semeval-2025.github.io/blob/main/data/incidents_test.csv))
#### Metrics
F<sub>1</sub>-macro
### Results
F<sub>1</sub>-macro scores on the official test set for each model (using the `text` field), reported per category and for subtasks ST1 and ST2, rounded to 3 decimals. This model's results are shown in bold.
| Model | hazard-category | product-category | hazard | product | ST1 | ST2 |
|----------------------|----------------:|-----------------:|-------:|--------:|------:|------:|
| BERT<sub>base</sub> | 0.747 | 0.757 | 0.581 | 0.170 | 0.753 | 0.382 |
| BERT<sub>CW</sub> | 0.760 | 0.761 | 0.671 | 0.280 | 0.762 | 0.491 |
| BERT<sub>SR</sub> | 0.770 | 0.754 | 0.666 | 0.275 | 0.764 | 0.478 |
| BERT<sub>RW</sub> | 0.752 | 0.757 | 0.651 | 0.275 | 0.756 | 0.467 |
| DistilBERT<sub>base</sub> | 0.761 | 0.757 | 0.593 | 0.154 | 0.760 | 0.378 |
| DistilBERT<sub>CW</sub> | 0.766 | 0.753 | 0.635 | 0.246 | 0.763 | 0.449 |
| DistilBERT<sub>SR</sub> | 0.756 | 0.759 | 0.644 | 0.240 | 0.763 | 0.448 |
| **DistilBERT<sub>RW</sub>** | **0.749** | **0.747** | **0.647** | **0.261** | **0.753** | **0.462** |
| RoBERTa<sub>base</sub> | 0.760 | 0.753 | 0.579 | 0.123 | 0.755 | 0.356 |
| RoBERTa<sub>CW</sub> | 0.773 | 0.739 | 0.630 | 0.000 | 0.760 | 0.315 |
| RoBERTa<sub>SR</sub> | 0.777 | 0.755 | 0.637 | 0.000 | 0.767 | 0.319 |
| RoBERTa<sub>RW</sub> | 0.757 | 0.611 | 0.615 | 0.000 | 0.686 | 0.308 |
| ModernBERT<sub>base</sub> | 0.781 | 0.745 | 0.667 | 0.275 | 0.769 | 0.485 |
| ModernBERT<sub>CW</sub> | 0.761 | 0.712 | 0.609 | 0.252 | 0.741 | 0.441 |
| ModernBERT<sub>SR</sub> | 0.790 | 0.728 | 0.591 | 0.253 | 0.761 | 0.434 |
| ModernBERT<sub>RW</sub> | 0.761 | 0.751 | 0.629 | 0.237 | 0.759 | 0.440 |
## Technical Specifications
### Compute Infrastructure
#### Hardware
NVIDIA A100 80GB and NVIDIA GeForce RTX 3070 Ti
#### Software
| Library | Version | URL |
|-------------------|--------:|---------------------------------------------------------------------|
| Transformers | 4.49.0 | https://huggingface.co/docs/transformers/index |
| PyTorch | 2.6.0 | https://pytorch.org/ |
| SpaCy | 3.8.4 | https://spacy.io/ |
| Scikit-learn | 1.6.0 | https://scikit-learn.org/stable/ |
| Pandas | 2.2.3 | https://pandas.pydata.org/ |
| Optuna | 4.2.1 | https://optuna.org/ |
| NumPy | 2.0.2 | https://numpy.org/ |
| NLP AUG | 1.1.11 | https://nlpaug.readthedocs.io/en/latest/index.html |
| BeautifulSoup4 | 4.12.3 | https://www.crummy.com/software/BeautifulSoup/bs4/doc/# |
## Citation
**BibTeX:**
For the original paper:
```
@inproceedings{brightcookies-semeval2025-task9,
title="BrightCookies at {S}em{E}val-2025 Task 9: Exploring Data Augmentation for Food Hazard Classification},
author="Papadopoulou, Foteini and Mutlu, Osman and Özen, Neris and van der Velden, Bas H. M. and Hendrickx, Iris and Hürriyetoğlu, Ali",
booktitle = "Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)",
month = jul,
year = "2025",
address = "Vienna, Austria",
publisher = "Association for Computational Linguistics",
}
```
For the SemEval2025 Task9:
```
@inproceedings{semeval2025-task9,
title = "{S}em{E}val-2025 Task 9: The Food Hazard Detection Challenge",
author = "Randl, Korbinian and Pavlopoulos, John and Henriksson, Aron and Lindgren, Tony and Bakagianni, Juli",
booktitle = "Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)",
month = jul,
year = "2025",
address = "Vienna, Austria",
publisher = "Association for Computational Linguistics",
}
```
## Model Card Authors and Contact
Authors: Foteini Papadopoulou, Osman Mutlu, Neris Özen,
Bas H.M. van der Velden, Iris Hendrickx, Ali Hürriyetoğlu
Contact: [email protected] |
Zitamillian/Zita | Zitamillian | 2025-04-30T17:15:31Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-30T17:15:31Z | ---
license: apache-2.0
---
|
Yuhan123/ppo-reading-level-full-question-12th-1-steps-10000-epoch-999-best-eval-score-0.332 | Yuhan123 | 2025-04-30T17:15:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T17:12:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DataScienceWFSR/distilbert-food-product-category-rw | DataScienceWFSR | 2025-04-30T17:12:42Z | 3 | 0 | null | [
"safetensors",
"distilbert",
"text-classification",
"en",
"arxiv:2504.20703",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"region:us"
] | text-classification | 2025-04-28T15:01:19Z | ---
language:
- en
metrics:
- f1
base_model:
- distilbert/distilbert-base-uncased
pipeline_tag: text-classification
---
# DistilBert Food Product Category Classification Model - Random Word Swapping Augmentation
## Model Details
### Model Description
This model is distilbert-base-uncased fine-tuned for multi-class food product-category text classification, using random word swapping augmentation.
- **Developed by:** [DataScienceWFSR](https://huggingface.co/DataScienceWFSR)
- **Model type:** Text Classification
- **Language(s) (NLP):** English
- **Finetuned from model:** [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased)
### Model Sources
- **Repository:** [https://github.com/WFSRDataScience/SemEval2025Task9](https://github.com/WFSRDataScience/SemEval2025Task9)
- **Paper :** [https://arxiv.org/abs/2504.20703](https://arxiv.org/abs/2504.20703)
## How to Get Started With the Model
Use the code below to get started with the model in PyTorch.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from huggingface_hub import hf_hub_download
import pandas as pd
model, category, augmentation = 'distilbert', 'product-category', 'rw'
repo_id = f"DataScienceWFSR/{model}-food-{category}-{augmentation}"
lb_path = hf_hub_download(repo_id=repo_id, filename=f"labelencoder_{category}.pkl")
lb = pd.read_pickle(lb_path)
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()
sample = ('Case Number: 039-94 Date Opened: 10/20/1994 Date Closed: 03/06/1995 Recall Class: 1'
' Press Release (Y/N): N Domestic Est. Number: 07188 M Name: PREPARED FOODS Imported '
'Product (Y/N): N Foreign Estab. Number: N/A City: SANTA TERESA State: NM Country: USA'
' Product: HAM, SLICED Problem: BACTERIA Description: LISTERIA '
'Total Pounds Recalled: 3,920 Pounds Recovered: 3,920')
inputs = tokenizer(sample, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
predictions = outputs.logits.argmax(dim=-1)
predicted_label = lb.inverse_transform(predictions.numpy())[0]
print(f"The predicted label is: {predicted_label}")
```
## Training Details
### Training Data
Training and validation data were provided by the SemEval-2025 Task 9 organizers: the `Food Recall Incidents` dataset (English only) [link](https://github.com/food-hazard-detection-semeval-2025/food-hazard-detection-semeval-2025.github.io/tree/main/data)
### Training Procedure
#### Training Hyperparameters
- batch_size: `32`
- epochs: `3`
- lr_scheduler: `cosine`
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data & Metrics
#### Testing Data
Test data: 997 samples ([link](https://github.com/food-hazard-detection-semeval-2025/food-hazard-detection-semeval-2025.github.io/blob/main/data/incidents_test.csv))
#### Metrics
F<sub>1</sub>-macro
### Results
F<sub>1</sub>-macro scores on the official test set for each model (using the `text` field), reported per category and for subtasks ST1 and ST2, rounded to 3 decimals. This model's results are shown in bold.
| Model | hazard-category | product-category | hazard | product | ST1 | ST2 |
|----------------------|----------------:|-----------------:|-------:|--------:|------:|------:|
| BERT<sub>base</sub> | 0.747 | 0.757 | 0.581 | 0.170 | 0.753 | 0.382 |
| BERT<sub>CW</sub> | 0.760 | 0.761 | 0.671 | 0.280 | 0.762 | 0.491 |
| BERT<sub>SR</sub> | 0.770 | 0.754 | 0.666 | 0.275 | 0.764 | 0.478 |
| BERT<sub>RW</sub> | 0.752 | 0.757 | 0.651 | 0.275 | 0.756 | 0.467 |
| DistilBERT<sub>base</sub> | 0.761 | 0.757 | 0.593 | 0.154 | 0.760 | 0.378 |
| DistilBERT<sub>CW</sub> | 0.766 | 0.753 | 0.635 | 0.246 | 0.763 | 0.449 |
| DistilBERT<sub>SR</sub> | 0.756 | 0.759 | 0.644 | 0.240 | 0.763 | 0.448 |
| **DistilBERT<sub>RW</sub>** | **0.749** | **0.747** | **0.647** | **0.261** | **0.753** | **0.462** |
| RoBERTa<sub>base</sub> | 0.760 | 0.753 | 0.579 | 0.123 | 0.755 | 0.356 |
| RoBERTa<sub>CW</sub> | 0.773 | 0.739 | 0.630 | 0.000 | 0.760 | 0.315 |
| RoBERTa<sub>SR</sub> | 0.777 | 0.755 | 0.637 | 0.000 | 0.767 | 0.319 |
| RoBERTa<sub>RW</sub> | 0.757 | 0.611 | 0.615 | 0.000 | 0.686 | 0.308 |
| ModernBERT<sub>base</sub> | 0.781 | 0.745 | 0.667 | 0.275 | 0.769 | 0.485 |
| ModernBERT<sub>CW</sub> | 0.761 | 0.712 | 0.609 | 0.252 | 0.741 | 0.441 |
| ModernBERT<sub>SR</sub> | 0.790 | 0.728 | 0.591 | 0.253 | 0.761 | 0.434 |
| ModernBERT<sub>RW</sub> | 0.761 | 0.751 | 0.629 | 0.237 | 0.759 | 0.440 |
## Technical Specifications
### Compute Infrastructure
#### Hardware
NVIDIA A100 80GB and NVIDIA GeForce RTX 3070 Ti
#### Software
| Library | Version | URL |
|-------------------|--------:|---------------------------------------------------------------------|
| Transformers | 4.49.0 | https://huggingface.co/docs/transformers/index |
| PyTorch | 2.6.0 | https://pytorch.org/ |
| SpaCy | 3.8.4 | https://spacy.io/ |
| Scikit-learn | 1.6.0 | https://scikit-learn.org/stable/ |
| Pandas | 2.2.3 | https://pandas.pydata.org/ |
| Optuna | 4.2.1 | https://optuna.org/ |
| NumPy | 2.0.2 | https://numpy.org/ |
| NLP AUG | 1.1.11 | https://nlpaug.readthedocs.io/en/latest/index.html |
| BeautifulSoup4 | 4.12.3 | https://www.crummy.com/software/BeautifulSoup/bs4/doc/# |
## Citation
**BibTeX:**
For the original paper:
```
@inproceedings{brightcookies-semeval2025-task9,
title="BrightCookies at {S}em{E}val-2025 Task 9: Exploring Data Augmentation for Food Hazard Classification},
author="Papadopoulou, Foteini and Mutlu, Osman and Özen, Neris and van der Velden, Bas H. M. and Hendrickx, Iris and Hürriyetoğlu, Ali",
booktitle = "Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)",
month = jul,
year = "2025",
address = "Vienna, Austria",
publisher = "Association for Computational Linguistics",
}
```
For the SemEval2025 Task9:
```
@inproceedings{semeval2025-task9,
title = "{S}em{E}val-2025 Task 9: The Food Hazard Detection Challenge",
author = "Randl, Korbinian and Pavlopoulos, John and Henriksson, Aron and Lindgren, Tony and Bakagianni, Juli",
booktitle = "Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)",
month = jul,
year = "2025",
address = "Vienna, Austria",
publisher = "Association for Computational Linguistics",
}
```
## Model Card Authors and Contact
Authors: Foteini Papadopoulou, Osman Mutlu, Neris Özen,
Bas H.M. van der Velden, Iris Hendrickx, Ali Hürriyetoğlu
Contact: [email protected] |
DanielSc4/xlmr-large-classifier-beware_of_pity_de_tra1-eng | DanielSc4 | 2025-04-30T17:12:22Z | 1 | 0 | null | [
"safetensors",
"xlm-roberta",
"text-classification",
"eng",
"license:apache-2.0",
"region:us"
] | text-classification | 2025-04-24T22:48:47Z | ---
language:
- eng
license: apache-2.0
tags:
- text-classification
pipeline_tag: text-classification
---
# xlmr-large-classifier-beware_of_pity_de_tra1-eng - MT/HT Classifier
This model is a fine-tuned version of [`FacebookAI/xlm-roberta-large`](https://huggingface.co/FacebookAI/xlm-roberta-large) for distinguishing between Machine Translated (MT) and Human Translated (HT) text
(or HT1 and HT2 if using two different human translators).
Training data:
* Train: 1212 samples (606 per label)
* Validation: 134 samples (67 per label)
* Test: 172 samples (86 per label)
Results on the held-out test set:
* Accuracy: 0.9535
* F1-Score: 0.9535
* Precision: 0.9535
* Recall: 0.9535
## Label mapping

* MT (machine translation): 0
* PE (the human translator): 1
## Info
Upload date: 2025-04-30 00:00
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("DanielSc4/xlmr-large-classifier-beware_of_pity_de_tra1-eng")
model = AutoModelForSequenceClassification.from_pretrained("DanielSc4/xlmr-large-classifier-beware_of_pity_de_tra1-eng")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
inp = tokenizer('This is a test', return_tensors='pt').to(device)
model = model.to(device)
out = model(**inp)
logits = out.logits
probs = logits.softmax(dim=-1)
pred = probs.argmax(dim=-1).item()
print("Predicted class: " + str(pred)) # 0 for MT, 1 for PE
```
|
NikolayKozloff/helium-1-2b-science-Q8_0-GGUF | NikolayKozloff | 2025-04-30T17:12:17Z | 0 | 1 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"bg",
"cs",
"da",
"de",
"el",
"en",
"es",
"et",
"fi",
"fr",
"ga",
"hr",
"hu",
"it",
"lt",
"lv",
"mt",
"nl",
"pl",
"pt",
"ro",
"sk",
"sl",
"sv",
"base_model:kyutai/helium-1-2b-science",
"base_model:quantized:kyutai/helium-1-2b-science",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T17:12:03Z | ---
base_model: kyutai/helium-1-2b-science
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
library_name: transformers
license: cc-by-sa-4.0
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/helium-1-2b-science-Q8_0-GGUF
This model was converted to GGUF format from [`kyutai/helium-1-2b-science`](https://huggingface.co/kyutai/helium-1-2b-science) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/kyutai/helium-1-2b-science) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/helium-1-2b-science-Q8_0-GGUF --hf-file helium-1-2b-science-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/helium-1-2b-science-Q8_0-GGUF --hf-file helium-1-2b-science-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/helium-1-2b-science-Q8_0-GGUF --hf-file helium-1-2b-science-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/helium-1-2b-science-Q8_0-GGUF --hf-file helium-1-2b-science-q8_0.gguf -c 2048
```
|
jmalejandrob79/nrmexp05 | jmalejandrob79 | 2025-04-30T17:11:48Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-30T14:44:46Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: NRMEXP05
---
# Nrmexp05
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `NRMEXP05` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "NRMEXP05",
"lora_weights": "https://huggingface.co/jmalejandrob79/nrmexp05/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('jmalejandrob79/nrmexp05', weight_name='lora.safetensors')
image = pipeline('NRMEXP05').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 5000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/jmalejandrob79/nrmexp05/discussions) to add images that show off what you’ve made with this LoRA.
|
KSJcompany/LLM-assignment1-datapreprocessing1 | KSJcompany | 2025-04-30T17:10:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Llama-3.2-1B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Llama-3.2-1B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-30T17:08:42Z | ---
base_model: unsloth/Llama-3.2-1B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** KSJcompany
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-1B-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Yuhan123/ppo-reading-level-full-question-12th-1-steps-10000-epoch-999-best-eval-score-0.415 | Yuhan123 | 2025-04-30T17:09:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T17:06:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
filipesantoscv11/caf7f921-1204-453a-8dc3-a2a4ba6364bf | filipesantoscv11 | 2025-04-30T17:09:17Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Meta-Llama-3.1-8B",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B",
"license:llama3.1",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-30T16:54:09Z | ---
library_name: peft
license: llama3.1
base_model: unsloth/Meta-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: caf7f921-1204-453a-8dc3-a2a4ba6364bf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Meta-Llama-3.1-8B
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 9d9fc9e163a355df_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9d9fc9e163a355df_train_data.json
type:
field_input: input
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: filipesantoscv11/caf7f921-1204-453a-8dc3-a2a4ba6364bf
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/9d9fc9e163a355df_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 13d11646-21ab-4f31-95f7-0f1237974b02
wandb_project: s56-6
wandb_run: your_name
wandb_runid: 13d11646-21ab-4f31-95f7-0f1237974b02
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# caf7f921-1204-453a-8dc3-a2a4ba6364bf
This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B](https://huggingface.co/unsloth/Meta-Llama-3.1-8B) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9089
## Model description
More information needed
## Intended uses & limitations
More information needed
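As a rough illustration of how a LoRA adapter like this one can be loaded for inference with PEFT (this assumes the repository contains a standard LoRA adapter on top of `unsloth/Meta-Llama-3.1-8B`; the prompt and generation settings below are illustrative only, not part of this card):

```python
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

# Load the base model together with the LoRA adapter stored in this repository.
model = AutoPeftModelForCausalLM.from_pretrained(
    "filipesantoscv11/caf7f921-1204-453a-8dc3-a2a4ba6364bf",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/Meta-Llama-3.1-8B")

# Illustrative prompt following the '{instruction} {input}' format used in the training config.
prompt = "Answer the question. What is the boiling point of water at sea level?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```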
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: 8-bit AdamW (bitsandbytes, `adamw_bnb_8bit`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8831 | 0.0651 | 200 | 0.9089 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ZeaNL/setfit-absa-bge-small-en-v1.5-restaurants-polarity | ZeaNL | 2025-04-30T17:05:28Z | 0 | 0 | setfit | [
"setfit",
"safetensors",
"mpnet",
"absa",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/all-mpnet-base-v2",
"base_model:finetune:sentence-transformers/all-mpnet-base-v2",
"model-index",
"region:us"
] | text-classification | 2025-04-30T17:04:58Z | ---
tags:
- setfit
- absa
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: '- they use fresh mozzarella instead of the:The pizza is delicious - they
use fresh mozzarella instead of the cheap, frozen, shredded cheese common to most
pizzaria''s.'
- text: 'refinement: Food, though somewhat:An oasis of refinement: Food, though
somewhat uneven, often reaches the pinnacles of new American fine cuisine - chef''s
passion (and kitchen''s precise execution) is most evident in the fish dishes
and soups.'
- text: We had the lobster sandwich and it was:We had the lobster sandwich and it
was FANTASTIC.
- text: The fish is fresh but:The fish is fresh but the variety of fish is nothing
out of ordinary.
- text: with classic upscale Italian decor.:Nice restaurant overall, with classic
upscale Italian decor.
metrics:
- accuracy
pipeline_tag: text-classification
library_name: setfit
inference: false
base_model: sentence-transformers/all-mpnet-base-v2
model-index:
- name: SetFit Polarity Model with sentence-transformers/all-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.8188976377952756
name: Accuracy
---
# SetFit Polarity Model with sentence-transformers/all-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Aspect Based Sentiment Analysis (ABSA). This SetFit model uses [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. In particular, this model is in charge of classifying aspect polarities.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
This model was trained within the context of a larger system for ABSA, which looks like so:
1. Use a spaCy model to select possible aspect span candidates.
2. Use a SetFit model to filter these possible aspect span candidates.
3. **Use this SetFit model to classify the filtered aspect span candidates.**
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **spaCy Model:** en_core_web_sm
- **SetFitABSA Aspect Model:** [ZeaNL/setfit-absa-bge-small-en-v1.5-restaurants-aspect](https://huggingface.co/ZeaNL/setfit-absa-bge-small-en-v1.5-restaurants-aspect)
- **SetFitABSA Polarity Model:** [ZeaNL/setfit-absa-bge-small-en-v1.5-restaurants-polarity](https://huggingface.co/ZeaNL/setfit-absa-bge-small-en-v1.5-restaurants-polarity)
- **Maximum Sequence Length:** 384 tokens
- **Number of Classes:** 4 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:---------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| negative | <ul><li>'But the staff was so horrible:But the staff was so horrible to us.'</li><li>', forgot our toast, left out:They did not have mayonnaise, forgot our toast, left out ingredients (ie cheese in an omelet), below hot temperatures and the bacon was so over cooked it crumbled on the plate when you touched it.'</li><li>'did not have mayonnaise, forgot our:They did not have mayonnaise, forgot our toast, left out ingredients (ie cheese in an omelet), below hot temperatures and the bacon was so over cooked it crumbled on the plate when you touched it.'</li></ul> |
| positive | <ul><li>"factor was the food, which was:To be completely fair, the only redeeming factor was the food, which was above average, but couldn't make up for all the other deficiencies of Teodora."</li><li>"The food is uniformly exceptional:The food is uniformly exceptional, with a very capable kitchen which will proudly whip up whatever you feel like eating, whether it's on the menu or not."</li><li>"a very capable kitchen which will proudly:The food is uniformly exceptional, with a very capable kitchen which will proudly whip up whatever you feel like eating, whether it's on the menu or not."</li></ul> |
| neutral | <ul><li>"'s on the menu or not.:The food is uniformly exceptional, with a very capable kitchen which will proudly whip up whatever you feel like eating, whether it's on the menu or not."</li><li>'to sample both meats).:Our agreed favorite is the orrechiete with sausage and chicken (usually the waiters are kind enough to split the dish in half so you get to sample both meats).'</li><li>'to split the dish in half so:Our agreed favorite is the orrechiete with sausage and chicken (usually the waiters are kind enough to split the dish in half so you get to sample both meats).'</li></ul> |
| conflict | <ul><li>'The food was delicious but:The food was delicious but do not come here on a empty stomach.'</li><li>"The service varys from day:The service varys from day to day- sometimes they're very nice, and sometimes not."</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.8189 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import AbsaModel
# Download from the 🤗 Hub
model = AbsaModel.from_pretrained(
"ZeaNL/setfit-absa-bge-small-en-v1.5-restaurants-aspect",
"ZeaNL/setfit-absa-bge-small-en-v1.5-restaurants-polarity",
)
# Run inference
preds = model("The food was great, but the venue is just way too busy.")
```
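The predictions are returned per input sentence as extracted aspect spans paired with their predicted polarity (for the example above, spans such as "food" and "venue" with labels like positive or negative); the exact output structure may vary between SetFit versions.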
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 6 | 21.3594 | 43 |
| Label | Training Sample Count |
|:---------|:----------------------|
| conflict | 2 |
| negative | 19 |
| neutral | 25 |
| positive | 82 |
### Training Hyperparameters
- batch_size: (128, 128)
- num_epochs: (5, 5)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: True
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
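These values correspond to SetFit's `TrainingArguments`. Below is a minimal sketch of how a polarity model like this one could be trained with the SetFit ABSA trainer; the dataset, its column layout, and the spaCy model are assumptions, not the authors' exact training script:

```python
from datasets import load_dataset
from setfit import AbsaModel, AbsaTrainer, TrainingArguments

# Illustrative ABSA training data (assumed available) with "text", "span", "label" and "ordinal" columns.
train_dataset = load_dataset("tomaarsen/setfit-absa-semeval-restaurants", split="train[:128]")

model = AbsaModel.from_pretrained(
    "sentence-transformers/all-mpnet-base-v2",
    spacy_model="en_core_web_sm",
)

args = TrainingArguments(
    batch_size=(128, 128),
    num_epochs=(5, 5),
    body_learning_rate=(2e-5, 1e-5),
    head_learning_rate=0.01,
    sampling_strategy="oversampling",
    warmup_proportion=0.1,
    l2_weight=0.01,
    use_amp=True,
    seed=42,
)

trainer = AbsaTrainer(model, args=args, train_dataset=train_dataset)
trainer.train()
```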
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0147 | 1 | 0.2714 | - |
| 0.7353 | 50 | 0.1474 | 0.1663 |
| 1.4706 | 100 | 0.0194 | 0.2206 |
| 2.2059 | 150 | 0.0012 | 0.2249 |
| 2.9412 | 200 | 0.0006 | 0.2240 |
| 3.6765 | 250 | 0.0004 | 0.2267 |
| 4.4118 | 300 | 0.0003 | 0.2275 |
### Framework Versions
- Python: 3.11.12
- SetFit: 1.1.2
- Sentence Transformers: 3.4.1
- spaCy: 3.7.5
- Transformers: 4.51.3
- PyTorch: 2.6.0+cu124
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
DataScienceWFSR/bert-food-hazard-cw | DataScienceWFSR | 2025-04-30T17:03:27Z | 3 | 0 | null | [
"safetensors",
"bert",
"text-classification",
"en",
"arxiv:2504.20703",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"region:us"
] | text-classification | 2025-04-28T14:21:45Z | ---
language:
- en
metrics:
- f1
base_model:
- google-bert/bert-base-uncased
pipeline_tag: text-classification
---
# BERT Food Hazard Classification Model - Contextual Word Insertion Augmentation
## Model Details
### Model Description
This model is a fine-tuned version of bert-base-uncased for multi-class food hazard text classification, trained with contextual word insertion (CW) augmentation.
- **Developed by:** [DataScienceWFSR](https://huggingface.co/DataScienceWFSR)
- **Model type:** Text Classification
- **Language(s) (NLP):** English
- **Finetuned from model:** [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased)
### Model Sources
- **Repository:** [https://github.com/WFSRDataScience/SemEval2025Task9](https://github.com/WFSRDataScience/SemEval2025Task9)
- **Paper :** [https://arxiv.org/abs/2504.20703](https://arxiv.org/abs/2504.20703)
## How to Get Started With the Model
Use the code below to get started with the model in PyTorch.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from huggingface_hub import hf_hub_download
import pandas as pd
model, category, augmentation = 'bert', 'hazard', 'cw'
repo_id = f"DataScienceWFSR/{model}-food-{category}-{augmentation}"
lb_path = hf_hub_download(repo_id=repo_id, filename=f"labelencoder_{category}.pkl")
lb = pd.read_pickle(lb_path)
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()
sample = ('Case Number: 039-94 Date Opened: 10/20/1994 Date Closed: 03/06/1995 Recall Class: 1'
' Press Release (Y/N): N Domestic Est. Number: 07188 M Name: PREPARED FOODS Imported '
'Product (Y/N): N Foreign Estab. Number: N/A City: SANTA TERESA State: NM Country: USA'
' Product: HAM, SLICED Problem: BACTERIA Description: LISTERIA '
'Total Pounds Recalled: 3,920 Pounds Recovered: 3,920')
inputs = tokenizer(sample, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
predictions = outputs.logits.argmax(dim=-1)
predicted_label = lb.inverse_transform(predictions.numpy())[0]
print(f"The predicted label is: {predicted_label}")
```
## Training Details
### Training Data
Training and validation data were provided by the SemEval-2025 Task 9 organizers: the `Food Recall Incidents` dataset (English only) ([link](https://github.com/food-hazard-detection-semeval-2025/food-hazard-detection-semeval-2025.github.io/tree/main/data))
### Training Procedure
#### Training Hyperparameters
- batch_size: `8`
- epochs: `3`
- lr_scheduler: `cosine`
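As a rough sketch, these settings map onto the 🤗 Transformers `TrainingArguments` as shown below; the learning rate and output directory are assumptions, since they are not reported in this card:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="bert-food-hazard-cw",   # assumed name
    per_device_train_batch_size=8,      # batch_size: 8
    num_train_epochs=3,                 # epochs: 3
    lr_scheduler_type="cosine",         # lr_scheduler: cosine
    learning_rate=2e-5,                 # assumed; not reported in this card
)
```

These arguments would then be passed to a `Trainer` together with the tokenized training and validation splits.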
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data & Metrics
#### Testing Data
Test data: 997 samples ([link](https://github.com/food-hazard-detection-semeval-2025/food-hazard-detection-semeval-2025.github.io/blob/main/data/incidents_test.csv))
#### Metrics
F<sub>1</sub>-macro
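For reference, the metric can be computed with scikit-learn; the labels below are placeholders for illustration only:

```python
from sklearn.metrics import f1_score

# Placeholder gold and predicted labels, for illustration only.
y_true = ["biological", "allergens", "biological", "foreign bodies"]
y_pred = ["biological", "allergens", "allergens", "foreign bodies"]

print(f1_score(y_true, y_pred, average="macro"))
```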
### Results
F<sub>1</sub>-macro scores of each model on the official test set (using the `text` field), per category and per subtask (ST1 and ST2), rounded to 3 decimals. The results of this specific model are shown in bold.
| Model | hazard-category | product-category | hazard | product | ST1 | ST2 |
|----------------------|----------------:|-----------------:|-------:|--------:|------:|------:|
| BERT<sub>base</sub> | 0.747 | 0.757 | 0.581 | 0.170 | 0.753 | 0.382 |
| **BERT<sub>CW</sub>** | **0.760** | **0.761** | **0.671** | **0.280** | **0.762** | **0.491** |
| BERT<sub>SR</sub> | 0.770 | 0.754 | 0.666 | 0.275 | 0.764 | 0.478 |
| BERT<sub>RW</sub> | 0.752 | 0.757 | 0.651 | 0.275 | 0.756 | 0.467 |
| DistilBERT<sub>base</sub> | 0.761 | 0.757 | 0.593 | 0.154 | 0.760 | 0.378 |
| DistilBERT<sub>CW</sub> | 0.766 | 0.753 | 0.635 | 0.246 | 0.763 | 0.449 |
| DistilBERT<sub>SR</sub> | 0.756 | 0.759 | 0.644 | 0.240 | 0.763 | 0.448 |
| DistilBERT<sub>RW</sub> | 0.749 | 0.747 | 0.647 | 0.261 | 0.753 | 0.462 |
| RoBERTa<sub>base</sub> | 0.760 | 0.753 | 0.579 | 0.123 | 0.755 | 0.356 |
| RoBERTa<sub>CW</sub> | 0.773 | 0.739 | 0.630 | 0.000 | 0.760 | 0.315 |
| RoBERTa<sub>SR</sub> | 0.777 | 0.755 | 0.637 | 0.000 | 0.767 | 0.319 |
| RoBERTa<sub>RW</sub> | 0.757 | 0.611 | 0.615 | 0.000 | 0.686 | 0.308 |
| ModernBERT<sub>base</sub> | 0.781 | 0.745 | 0.667 | 0.275 | 0.769 | 0.485 |
| ModernBERT<sub>CW</sub> | 0.761 | 0.712 | 0.609 | 0.252 | 0.741 | 0.441 |
| ModernBERT<sub>SR</sub> | 0.790 | 0.728 | 0.591 | 0.253 | 0.761 | 0.434 |
| ModernBERT<sub>RW</sub> | 0.761 | 0.751 | 0.629 | 0.237 | 0.759 | 0.440 |
## Technical Specifications
### Compute Infrastructure
#### Hardware
NVIDIA A100 80GB and NVIDIA GeForce RTX 3070 Ti
#### Software
| Library | Version | URL |
|-------------------|--------:|---------------------------------------------------------------------|
| Transformers | 4.49.0 | https://huggingface.co/docs/transformers/index |
| PyTorch | 2.6.0 | https://pytorch.org/ |
| SpaCy | 3.8.4 | https://spacy.io/ |
| Scikit-learn | 1.6.0 | https://scikit-learn.org/stable/ |
| Pandas | 2.2.3 | https://pandas.pydata.org/ |
| Optuna | 4.2.1 | https://optuna.org/ |
| NumPy | 2.0.2 | https://numpy.org/ |
| NLP AUG | 1.1.11 | https://nlpaug.readthedocs.io/en/latest/index.html |
| BeautifulSoup4 | 4.12.3 | https://www.crummy.com/software/BeautifulSoup/bs4/doc/# |
## Citation
**BibTeX:**
For the original paper:
```
@inproceedings{brightcookies-semeval2025-task9,
title="BrightCookies at {S}em{E}val-2025 Task 9: Exploring Data Augmentation for Food Hazard Classification},
author="Papadopoulou, Foteini and Mutlu, Osman and Özen, Neris and van der Velden, Bas H. M. and Hendrickx, Iris and Hürriyetoğlu, Ali",
booktitle = "Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)",
month = jul,
year = "2025",
address = "Vienna, Austria",
publisher = "Association for Computational Linguistics",
}
```
For the SemEval2025 Task9:
```
@inproceedings{semeval2025-task9,
title = "{S}em{E}val-2025 Task 9: The Food Hazard Detection Challenge",
author = "Randl, Korbinian and Pavlopoulos, John and Henriksson, Aron and Lindgren, Tony and Bakagianni, Juli",
booktitle = "Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)",
month = jul,
year = "2025",
address = "Vienna, Austria",
publisher = "Association for Computational Linguistics",
}
```
## Model Card Authors and Contact
Authors: Foteini Papadopoulou, Osman Mutlu, Neris Özen,
Bas H.M. van der Velden, Iris Hendrickx, Ali Hürriyetoğlu
Contact: [email protected] |