| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
| ultratopaz/798214 | ultratopaz | 2025-09-20T02:21:48Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:21:46Z | [View on Civ Archive](https://civarchive.com/models/464876?modelVersionId=889746) |
| amethyst9/1624700 | amethyst9 | 2025-09-20T02:21:20Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:21:17Z | [View on Civ Archive](https://civarchive.com/models/1523876?modelVersionId=1724147) |
| crystalline7/1410544 | crystalline7 | 2025-09-20T02:21:13Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:21:11Z | [View on Civ Archive](https://civarchive.com/models/1337578?modelVersionId=1510463) |
| crystalline7/1050603 | crystalline7 | 2025-09-20T02:21:01Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:20:58Z | [View on Civ Archive](https://civarchive.com/models/523270?modelVersionId=1145579) |
| ultratopaz/765572 | ultratopaz | 2025-09-20T02:20:39Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:20:36Z | [View on Civ Archive](https://civarchive.com/models/765792?modelVersionId=856541) |
| amethyst9/764882 | amethyst9 | 2025-09-20T02:20:22Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:20:22Z | [View on Civ Archive](https://civarchive.com/models/748235?modelVersionId=836958) |
| seraphimzzzz/1524765 | seraphimzzzz | 2025-09-20T02:20:00Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:19:58Z | [View on Civ Archive](https://civarchive.com/models/1437229?modelVersionId=1624604) |
| amethyst9/862243 | amethyst9 | 2025-09-20T02:19:53Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:19:51Z | [View on Civ Archive](https://civarchive.com/models/157067?modelVersionId=955485) |
| ultratopaz/743871 | ultratopaz | 2025-09-20T02:19:33Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:19:31Z | [View on Civ Archive](https://civarchive.com/models/483312?modelVersionId=829878) |
| crystalline7/1524789 | crystalline7 | 2025-09-20T02:19:12Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:19:10Z | [View on Civ Archive](https://civarchive.com/models/1437247?modelVersionId=1624630) |
| seraphimzzzz/1017382 | seraphimzzzz | 2025-09-20T02:19:05Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:19:03Z | [View on Civ Archive](https://civarchive.com/models/823386?modelVersionId=1112580) |
| ultratopaz/859780 | ultratopaz | 2025-09-20T02:18:59Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:18:57Z | [View on Civ Archive](https://civarchive.com/models/153318?modelVersionId=952986) |
| seraphimzzzz/888589 | seraphimzzzz | 2025-09-20T02:18:53Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:18:51Z | [View on Civ Archive](https://civarchive.com/models/689491?modelVersionId=982453) |
| amethyst9/696198 | amethyst9 | 2025-09-20T02:18:43Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:18:35Z | [View on Civ Archive](https://civarchive.com/models/464876?modelVersionId=782769) |
| ultratopaz/872474 | ultratopaz | 2025-09-20T02:18:28Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:18:26Z | [View on Civ Archive](https://civarchive.com/models/403828?modelVersionId=965923) |
| ultratopaz/1460531 | ultratopaz | 2025-09-20T02:17:47Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:17:45Z | [View on Civ Archive](https://civarchive.com/models/1381145?modelVersionId=1560623) |
| crystalline7/520505 | crystalline7 | 2025-09-20T02:17:40Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:17:36Z | [View on Civ Archive](https://civarchive.com/models/544476?modelVersionId=605483) |
| ultratopaz/134757 | ultratopaz | 2025-09-20T02:17:18Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:17:14Z | [View on Civ Archive](https://civarchive.com/models/157111?modelVersionId=176386) |
| seraphimzzzz/1480237 | seraphimzzzz | 2025-09-20T02:17:09Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:17:06Z | [View on Civ Archive](https://civarchive.com/models/1398232?modelVersionId=1580482) |
| seraphimzzzz/1460607 | seraphimzzzz | 2025-09-20T02:17:02Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:17:00Z | [View on Civ Archive](https://civarchive.com/models/1381224?modelVersionId=1560717) |
| seraphimzzzz/111297 | seraphimzzzz | 2025-09-20T02:16:54Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:16:50Z | [View on Civ Archive](https://civarchive.com/models/135816?modelVersionId=149771) |
| luckeciano/Qwen-2.5-7B-DrGRPO-SGD-FisherMaskToken-1e-3-HessianMaskToken-5e-4-v3_1053 | luckeciano | 2025-09-20T02:16:49Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "grpo", "conversational", "dataset:DigitalLearningGmbH/MATH-lighteval", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-Math-7B", "base_model:finetune:Qwen/Qwen2.5-Math-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-19T22:25:04Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-DrGRPO-SGD-FisherMaskToken-1e-3-HessianMaskToken-5e-4-v3_1053
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-DrGRPO-SGD-FisherMaskToken-1e-3-HessianMaskToken-5e-4-v3_1053
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-DrGRPO-SGD-FisherMaskToken-1e-3-HessianMaskToken-5e-4-v3_1053", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/q2w28jga)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
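For orientation, below is a minimal sketch of how a GRPO fine-tune of the same base model on the same dataset could be launched with TRL's `GRPOTrainer`. The prompt mapping, reward function, and hyperparameters shown are illustrative assumptions, not the settings used for this run.
```python
# Minimal GRPO sketch with TRL (illustrative only; not the exact training script).
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Assumption: the dataset exposes a "problem" field that we map to the "prompt"
# column expected by GRPOTrainer.
dataset = load_dataset("DigitalLearningGmbH/MATH-lighteval", split="train")
dataset = dataset.map(lambda ex: {"prompt": ex["problem"]})

def dummy_reward(completions, **kwargs):
    # Placeholder reward; a real run would score mathematical correctness.
    return [float(len(c) > 0) for c in completions]

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-Math-7B",
    reward_funcs=dummy_reward,
    args=GRPOConfig(output_dir="Qwen-2.5-7B-GRPO-sketch", logging_steps=10),
    train_dataset=dataset,
)
trainer.train()
```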
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.2
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
| amethyst9/1483078 | amethyst9 | 2025-09-20T02:16:39Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:16:37Z | [View on Civ Archive](https://civarchive.com/models/1400714?modelVersionId=1583326) |
| seraphimzzzz/911809 | seraphimzzzz | 2025-09-20T02:16:33Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:16:31Z | [View on Civ Archive](https://civarchive.com/models/544476?modelVersionId=1006127) |
| seraphimzzzz/1480432 | seraphimzzzz | 2025-09-20T02:16:11Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:16:09Z | [View on Civ Archive](https://civarchive.com/models/1398231?modelVersionId=1580481) |
| crystalline7/1031317 | crystalline7 | 2025-09-20T02:15:49Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:15:46Z | [View on Civ Archive](https://civarchive.com/models/135814?modelVersionId=1126607) |
| twelvehertz/open-o3-sft-11 | twelvehertz | 2025-09-20T02:15:43Z | 0 | 0 | peft | ["peft", "safetensors", "base_model:adapter:unsloth/Qwen2.5-14B-Instruct", "lora", "sft", "transformers", "trl", "unsloth", "text-generation", "conversational", "arxiv:1910.09700", "base_model:unsloth/Qwen2.5-14B-Instruct", "region:us"] | text-generation | 2025-09-20T02:15:40Z |
---
base_model: unsloth/Qwen2.5-14B-Instruct
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/Qwen2.5-14B-Instruct
- lora
- sft
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
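Pending author-provided instructions, a minimal loading sketch is given below. It assumes this repository holds a LoRA adapter for the `unsloth/Qwen2.5-14B-Instruct` base model listed in the metadata; the prompt and generation settings are illustrative.
```python
# Hypothetical usage sketch; repository layout and settings are assumptions.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Qwen2.5-14B-Instruct"   # base model from the card metadata
adapter_id = "twelvehertz/open-o3-sft-11"  # this repository (LoRA adapter)

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

messages = [{"role": "user", "content": "Explain LoRA fine-tuning in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```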
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
| amethyst9/558175 | amethyst9 | 2025-09-20T02:15:41Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:15:37Z | [View on Civ Archive](https://civarchive.com/models/152718?modelVersionId=643301) |
| seraphimzzzz/377535 | seraphimzzzz | 2025-09-20T02:15:12Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:15:08Z | [View on Civ Archive](https://civarchive.com/models/411067?modelVersionId=458219) |
| ultratopaz/737992 | ultratopaz | 2025-09-20T02:14:56Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:14:53Z | [View on Civ Archive](https://civarchive.com/models/736875?modelVersionId=824009) |
| seraphimzzzz/1072331 | seraphimzzzz | 2025-09-20T02:14:48Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:14:46Z | [View on Civ Archive](https://civarchive.com/models/1040287?modelVersionId=1167062) |
| amethyst9/1480233 | amethyst9 | 2025-09-20T02:13:42Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:13:39Z | [View on Civ Archive](https://civarchive.com/models/1398227?modelVersionId=1580477) |
| ultratopaz/787242 | ultratopaz | 2025-09-20T02:13:34Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:13:32Z | [View on Civ Archive](https://civarchive.com/models/553928?modelVersionId=878570) |
| crystalline7/880409 | crystalline7 | 2025-09-20T02:13:21Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:13:18Z | [View on Civ Archive](https://civarchive.com/models/647738?modelVersionId=974032) |
| amethyst9/520439 | amethyst9 | 2025-09-20T02:13:13Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:13:09Z | [View on Civ Archive](https://civarchive.com/models/544408?modelVersionId=605409) |
| seraphimzzzz/486169 | seraphimzzzz | 2025-09-20T02:12:56Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:12:52Z | [View on Civ Archive](https://civarchive.com/models/513368?modelVersionId=570531) |
| nikilr/Llama3.1-8B-clustertax50 | nikilr | 2025-09-20T02:12:39Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-20T02:11:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| ultratopaz/40170 | ultratopaz | 2025-09-20T02:12:28Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:12:24Z | [View on Civ Archive](https://civarchive.com/models/51553?modelVersionId=56026) |
| amethyst9/578980 | amethyst9 | 2025-09-20T02:12:13Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:12:13Z | [View on Civ Archive](https://civarchive.com/models/594450?modelVersionId=663948) |
| ultratopaz/1011096 | ultratopaz | 2025-09-20T02:12:08Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:12:05Z | [View on Civ Archive](https://civarchive.com/models/987374?modelVersionId=1106141) |
| crystalline7/134688 | crystalline7 | 2025-09-20T02:11:54Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:11:50Z | [View on Civ Archive](https://civarchive.com/models/157039?modelVersionId=176303) |
| amethyst9/1540861 | amethyst9 | 2025-09-20T02:11:38Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:11:35Z | [View on Civ Archive](https://civarchive.com/models/1450933?modelVersionId=1640479) |
| crystalline7/1645806 | crystalline7 | 2025-09-20T02:11:31Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:11:28Z | [View on Civ Archive](https://civarchive.com/models/1542353?modelVersionId=1745117) |
| crystalline7/1440876 | crystalline7 | 2025-09-20T02:11:23Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:11:20Z | [View on Civ Archive](https://civarchive.com/models/1363999?modelVersionId=1540966) |
| ultratopaz/1513471 | ultratopaz | 2025-09-20T02:11:10Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:11:08Z | [View on Civ Archive](https://civarchive.com/models/1427304?modelVersionId=1613318) |
| ultratopaz/1565881 | ultratopaz | 2025-09-20T02:11:03Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:11:00Z | [View on Civ Archive](https://civarchive.com/models/1472337?modelVersionId=1665347) |
| amethyst9/1061270 | amethyst9 | 2025-09-20T02:10:56Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:10:54Z | [View on Civ Archive](https://civarchive.com/models/1030771?modelVersionId=1156144) |
| amethyst9/845250 | amethyst9 | 2025-09-20T02:10:50Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:10:48Z | [View on Civ Archive](https://civarchive.com/models/513368?modelVersionId=937908) |
| seraphimzzzz/704709 | seraphimzzzz | 2025-09-20T02:10:38Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:10:27Z | [View on Civ Archive](https://civarchive.com/models/482569?modelVersionId=791174) |
| crystalline7/1388249 | crystalline7 | 2025-09-20T02:10:10Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:10:06Z | [View on Civ Archive](https://civarchive.com/models/1317988?modelVersionId=1487940) |
| amethyst9/798192 | amethyst9 | 2025-09-20T02:09:53Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:09:51Z | [View on Civ Archive](https://civarchive.com/models/482569?modelVersionId=889718) |
| ultratopaz/838957 | ultratopaz | 2025-09-20T02:09:38Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:09:36Z | [View on Civ Archive](https://civarchive.com/models/832785?modelVersionId=931622) |
| crystalline7/743905 | crystalline7 | 2025-09-20T02:09:32Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:09:30Z | [View on Civ Archive](https://civarchive.com/models/481207?modelVersionId=829906) |
| ultratopaz/901539 | ultratopaz | 2025-09-20T02:09:26Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:09:24Z | [View on Civ Archive](https://civarchive.com/models/602511?modelVersionId=995845) |
| ultratopaz/1507566 | ultratopaz | 2025-09-20T02:09:20Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:09:18Z | [View on Civ Archive](https://civarchive.com/models/1422216?modelVersionId=1607513) |
| ultratopaz/988125 | ultratopaz | 2025-09-20T02:08:38Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:08:36Z | [View on Civ Archive](https://civarchive.com/models/963200?modelVersionId=1083107) |
| crystalline7/1503439 | crystalline7 | 2025-09-20T02:08:26Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:08:24Z | [View on Civ Archive](https://civarchive.com/models/1416842?modelVersionId=1601448) |
| amethyst9/723001 | amethyst9 | 2025-09-20T02:08:10Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:08:07Z | [View on Civ Archive](https://civarchive.com/models/136478?modelVersionId=809411) |
| seraphimzzzz/1013267 | seraphimzzzz | 2025-09-20T02:08:04Z | 0 | 0 | null | ["region:us"] | null | 2025-08-04T19:01:44Z | [View on Civ Archive](https://civarchive.com/models/989352?modelVersionId=1108371) |
| amethyst9/1030870 | amethyst9 | 2025-09-20T02:07:52Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:07:50Z | [View on Civ Archive](https://civarchive.com/models/152718?modelVersionId=1126084) |
| ultratopaz/1650368 | ultratopaz | 2025-09-20T02:07:30Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:07:27Z | [View on Civ Archive](https://civarchive.com/models/1546342?modelVersionId=1749663) |
| amethyst9/1504417 | amethyst9 | 2025-09-20T02:07:06Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:07:04Z | [View on Civ Archive](https://civarchive.com/models/989352?modelVersionId=1604308) |
| amethyst9/1419308 | amethyst9 | 2025-09-20T02:06:49Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:06:47Z | [View on Civ Archive](https://civarchive.com/models/1345001?modelVersionId=1519005) |
| crystalline7/1469569 | crystalline7 | 2025-09-20T02:06:42Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:06:39Z | [View on Civ Archive](https://civarchive.com/models/1388917?modelVersionId=1569640) |
| fromthesky/PLDR-LLM-v52-81M-FT-SC-1 | fromthesky | 2025-09-20T02:06:12Z | 2 | 0 | transformers | ["transformers", "safetensors", "pldrllm", "text-classification", "sentiment-analysis", "sentiment-classification", "large-language-model", "power-law-decoder-representations", "power-law-graph-attention", "pldr-llm", "kv-cache", "g-cache", "kvg-cache", "pytorch", "custom_code", "en", "dataset:stanfordnlp/imdb", "arxiv:2502.13502", "arxiv:2306.01116", "arxiv:2101.00027", "arxiv:2410.16703", "base_model:fromthesky/PLDR-LLM-v52-110M-1", "base_model:finetune:fromthesky/PLDR-LLM-v52-110M-1", "license:apache-2.0", "model-index", "autotrain_compatible", "region:us"] | text-classification | 2025-08-29T11:10:12Z |
---
language:
- en
license: apache-2.0
tags:
- sentiment-analysis
- sentiment-classification
- large-language-model
- power-law-decoder-representations
- power-law-graph-attention
- pldr-llm
- kv-cache
- g-cache
- kvg-cache
- pytorch
datasets:
- stanfordnlp/imdb
base_model:
- fromthesky/PLDR-LLM-v52-110M-1
pipeline_tag: text-classification
library_name: transformers
model-index:
- name: PLDR-LLM-v52-81M-FT-SC-1
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: stanfordnlp/imdb
type: stanfordnlp/imdb
metrics:
- type: accuracy
value: 0.9466
name: Accuracy
- type: precision
value: 0.9463
name: Precision
- type: recall
value: 0.9489
name: Recall
- type: f1
value: 0.9476
name: F1
---
# PLDR-LLM-v52-81M-FT-SC-1
## Model Description
PLDR-LLM-v52-81M-FT-SC-1 is a finetuned PLDR-LLM (Large Language Model from Power Law Decoder Representations) with KV-cache and G-cache support for sequence classification. The model has 81M parameters. It was finetuned from the PLDR-LLM base model [PLDR-LLM-v52-110M-1](https://huggingface.co/fromthesky/PLDR-LLM-v52-110M-1) on the [imdb dataset](https://huggingface.co/datasets/stanfordnlp/imdb).
More details about the PLDR-LLM architecture can be found in the research paper titled [PLDR-LLMs Learn A Generalizable Tensor Operator That Can Replace Its Own Deep Neural Net At Inference](https://arxiv.org/abs/2502.13502).
## Training data
PLDR-LLM-v52-81M-FT-SC-1 was finetuned on the [imdb dataset](https://huggingface.co/datasets/stanfordnlp/imdb), a large movie review dataset for binary sentiment analysis comprising 25,000 movie reviews for training and another 25,000 for testing. The base model was pretrained on ~8B tokens from [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a publicly available English web dataset with extensive filtering and deduplication.
## Training procedure
The train and test samples were combined and the splits were readjusted to give a total of 45,000 samples for training and 5,000 samples for validation. No data cleaning was done. The model was trained with the custom PLDR-LLM implementation for the Hugging Face Transformers library. The following parameters were used for finetuning; all other parameters were kept the same as in the [research paper](https://arxiv.org/abs/2502.13502) detailing the PLDR-LLM architecture.
|Parameter | Value|
|----------|------|
| Learning rate | 7x10<sup>-5</sup> |
| Warm-up steps | 20 |
| Grad clip by norm | 1.0 |
| Epochs | 2 |
|Padding side| "right" |
|Add EOS token | True |
|min_lr_rate | 0.01 |
## Intended Use and Limitations
This model is intended for research purposes. Given a text prompt as input, it performs binary sentiment classification. The context length for this model is 1024 tokens.
## How to Use
### Via Huggingface Transformers Library
PLDR-LLM has custom model support for the Hugging Face Transformers library. Custom model support was evaluated on the Transformers 4.56.1 release available at the time.
```python
from transformers import pipeline
seq_classifier = pipeline(
task="sentiment-analysis",
model="fromthesky/PLDR-LLM-v52-81M-FT-SC-1",
device="cuda", # or "cpu"
trust_remote_code=True
)
text="""
Star Trek the Next Generation was arguably one of the most successful sci-fi shows \
in the late eighties and early nineties. With a cast that complemented each other's character \
seamlessly, the stories covered in the show touched on a wide variety of thought \
provoking issues such as a dying civilization's daring attempt to be remembered in \
"the Inner Light" and action packed two part episode with a cliffhanger in \
"Best of Both Worlds" against the formidable Borg Collective. The end result was \
a show that kept the audience engaged and entertained for the majority of the time it was on air.
"""
output=seq_classifier(text)
print(f"PREDICTION: {output}")
```
```
PREDICTION: [{'label': 'POSITIVE', 'score': 0.9999229907989502}]
```
#### Notes:
- This implementation of PLDR-LLM custom code was evaluated on Transformers 4.56.1 and pytorch 2.6.0.
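For reference, the model can presumably also be loaded without the pipeline wrapper; the following is a sketch under that assumption (the example text and argument choices are illustrative, and `trust_remote_code=True` is required for the custom PLDR-LLM code):
```python
# Sketch of a lower-level loading path; not taken from the original card.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "fromthesky/PLDR-LLM-v52-81M-FT-SC-1"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("A surprisingly heartfelt and well paced film.",
                   return_tensors="pt", truncation=True, max_length=1024)
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(dim=-1))])
```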
### Limitations and Biases
This model was finetuned from a pretrained large language model. Large language models may generate text that is profane, lewd, socially unacceptable or offensive, depending on the contents of the data they were pretrained on. RefinedWeb is a dataset that is as toxic and biased as the Pile; please see the papers for [RefinedWeb](https://arxiv.org/abs/2306.01116) and [the Pile](https://arxiv.org/pdf/2101.00027) for more information. Large language models are also susceptible to hallucinations and may generate text that contains incorrect, irrelevant or misleading information. Since it is very hard to anticipate the contents of generated text ahead of time, the output of large language models needs to be heavily moderated and curated to keep undesired content from appearing without warning.
## Eval results
- Evaluation was done on the 5,000 samples held out for validation.
|Metric | Value |
|-------------------|--------|
| Accuracy |0.9466|
| Precision |0.9463|
| Recall |0.9489|
| F1 |0.9476|
### BibTeX entry and citation info
```bibtex
@misc{gokden2025pldrllmkvgcache,
title={PLDR-LLMs Learn A Generalizable Tensor Operator That Can Replace Its Own Deep Neural Net At Inference},
author={Burc Gokden},
year={2025},
eprint={2502.13502},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2502.13502},
}
@misc{gokden2024pldrllm,
title={PLDR-LLM: Large Language Model from Power Law Decoder Representations},
author={Burc Gokden},
year={2024},
eprint={2410.16703},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2410.16703},
}
```
| amethyst9/1559811 | amethyst9 | 2025-09-20T02:05:59Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:05:57Z | [View on Civ Archive](https://civarchive.com/models/1467224?modelVersionId=1659431) |
| crystalline7/1494717 | crystalline7 | 2025-09-20T02:05:46Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:05:44Z | [View on Civ Archive](https://civarchive.com/models/1410706?modelVersionId=1594730) |
| crystalline7/880350 | crystalline7 | 2025-09-20T02:05:33Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:05:31Z | [View on Civ Archive](https://civarchive.com/models/564852?modelVersionId=973971) |
| ultratopaz/497308 | ultratopaz | 2025-09-20T02:05:19Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:05:17Z | [View on Civ Archive](https://civarchive.com/models/523270?modelVersionId=581941) |
| crystalline7/1022488 | crystalline7 | 2025-09-20T02:04:58Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:04:56Z | [View on Civ Archive](https://civarchive.com/models/135043?modelVersionId=1117638) |
| amethyst9/1672530 | amethyst9 | 2025-09-20T02:04:39Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:04:36Z | [View on Civ Archive](https://civarchive.com/models/1565684?modelVersionId=1771761) |
| moyixiao/Qwen3-0.6B-bnpo-f16-250 | moyixiao | 2025-09-20T02:04:29Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-20T02:04:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| ultratopaz/777816 | ultratopaz | 2025-09-20T02:04:27Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:04:24Z | [View on Civ Archive](https://civarchive.com/models/553928?modelVersionId=869020) |
| seraphimzzzz/972555 | seraphimzzzz | 2025-09-20T02:04:14Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:04:12Z | [View on Civ Archive](https://civarchive.com/models/923605?modelVersionId=1067039) |
| crystalline7/885186 | crystalline7 | 2025-09-20T02:04:08Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:04:06Z | [View on Civ Archive](https://civarchive.com/models/544335?modelVersionId=979086) |
| amethyst9/1604398 | amethyst9 | 2025-09-20T02:03:49Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:03:47Z | [View on Civ Archive](https://civarchive.com/models/1506263?modelVersionId=1703823) |
| schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758333755 | schooncestiaa | 2025-09-20T02:03:43Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "scruffy webbed dragonfly", "arxiv:2504.07091", "region:us"] | null | 2025-09-20T02:03:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| seraphimzzzz/825574 | seraphimzzzz | 2025-09-20T02:03:41Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:03:38Z | [View on Civ Archive](https://civarchive.com/models/821031?modelVersionId=918110) |
| pramitsaha/phi-3-mini-merged | pramitsaha | 2025-09-20T02:03:35Z | 45 | 0 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "base_model:finetune:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2025-09-14T16:53:07Z |
---
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** pramitsaha
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| crystalline7/1473831 | crystalline7 | 2025-09-20T02:03:18Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:03:16Z | [View on Civ Archive](https://civarchive.com/models/1392617?modelVersionId=1574037) |
| ultratopaz/377307 | ultratopaz | 2025-09-20T02:03:11Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:03:07Z | [View on Civ Archive](https://civarchive.com/models/410799?modelVersionId=457982) |
| crystalline7/997935 | crystalline7 | 2025-09-20T02:02:31Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:02:28Z | [View on Civ Archive](https://civarchive.com/models/790590?modelVersionId=1092909) |
| ultratopaz/1536347 | ultratopaz | 2025-09-20T02:02:24Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:02:22Z | [View on Civ Archive](https://civarchive.com/models/1447072?modelVersionId=1635984) |
| crystalline7/1410486 | crystalline7 | 2025-09-20T02:02:18Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:02:16Z | [View on Civ Archive](https://civarchive.com/models/1337530?modelVersionId=1510408) |
| crystalline7/1670779 | crystalline7 | 2025-09-20T02:02:12Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:02:10Z | [View on Civ Archive](https://civarchive.com/models/1564132?modelVersionId=1769974) |
| amethyst9/1410518 | amethyst9 | 2025-09-20T02:02:05Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:02:02Z | [View on Civ Archive](https://civarchive.com/models/1337562?modelVersionId=1510442) |
| crystalline7/1559818 | crystalline7 | 2025-09-20T02:01:52Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:01:49Z | [View on Civ Archive](https://civarchive.com/models/1467230?modelVersionId=1659436) |
| seraphimzzzz/895216 | seraphimzzzz | 2025-09-20T02:01:46Z | 0 | 0 | null | ["region:us"] | null | 2025-08-04T18:17:03Z | [View on Civ Archive](https://civarchive.com/models/528033?modelVersionId=989233) |
| seraphimzzzz/1031288 | seraphimzzzz | 2025-09-20T02:01:38Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:01:36Z | [View on Civ Archive](https://civarchive.com/models/135813?modelVersionId=1126579) |
| amethyst9/1519734 | amethyst9 | 2025-09-20T02:01:25Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:01:23Z | [View on Civ Archive](https://civarchive.com/models/1432946?modelVersionId=1619764) |
| crystalline7/111298 | crystalline7 | 2025-09-20T02:00:44Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:00:40Z | [View on Civ Archive](https://civarchive.com/models/135813?modelVersionId=149773) |
| crystalline7/502514 | crystalline7 | 2025-09-20T02:00:27Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:00:23Z | [View on Civ Archive](https://civarchive.com/models/528033?modelVersionId=587220) |
| crystalline7/899606 | crystalline7 | 2025-09-20T02:00:18Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T02:00:15Z | [View on Civ Archive](https://civarchive.com/models/530792?modelVersionId=993766) |
| seraphimzzzz/667503 | seraphimzzzz | 2025-09-20T02:00:11Z | 0 | 0 | null | ["region:us"] | null | 2025-08-04T18:14:19Z | [View on Civ Archive](https://civarchive.com/models/138576?modelVersionId=754163) |
| amethyst9/1617550 | amethyst9 | 2025-09-20T01:59:39Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T01:59:37Z | [View on Civ Archive](https://civarchive.com/models/1517767?modelVersionId=1717181) |
| ultratopaz/904625 | ultratopaz | 2025-09-20T01:59:25Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T01:59:23Z | [View on Civ Archive](https://civarchive.com/models/557865?modelVersionId=998877) |
| namminh27/t5_finetuned_vietnamese | namminh27 | 2025-09-20T01:59:12Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "speecht5", "text-to-audio", "generated_from_trainer", "base_model:namminh27/speecht5_finetuned_vietnamese", "base_model:finetune:namminh27/speecht5_finetuned_vietnamese", "license:mit", "endpoints_compatible", "region:us"] | text-to-audio | 2025-09-19T22:54:39Z |
---
library_name: transformers
license: mit
base_model: namminh27/speecht5_finetuned_vietnamese
tags:
- generated_from_trainer
model-index:
- name: t5_finetuned_vietnamese
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5_finetuned_vietnamese
This model is a fine-tuned version of [namminh27/speecht5_finetuned_vietnamese](https://huggingface.co/namminh27/speecht5_finetuned_vietnamese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5247
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 3000
- mixed_precision_training: Native AMP
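For readers reproducing the setup, these hyperparameters map roughly onto Hugging Face `Seq2SeqTrainingArguments` as sketched below; the output directory and the evaluation/logging cadence are assumptions rather than values stated in this card.
```python
# Rough mapping of the listed hyperparameters onto training arguments;
# placeholders are marked as such.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5_finetuned_vietnamese",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=3000,
    fp16=True,               # "Native AMP" mixed precision
    eval_strategy="steps",   # assumption: matches the 100-step cadence in the results table
    eval_steps=100,
    logging_steps=100,
)
```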
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7287 | 0.0466 | 100 | 0.6617 |
| 0.6723 | 0.0932 | 200 | 0.6190 |
| 0.6317 | 0.1398 | 300 | 0.5963 |
| 0.6235 | 0.1864 | 400 | 0.5849 |
| 0.6137 | 0.2330 | 500 | 0.5645 |
| 0.6143 | 0.2797 | 600 | 0.5634 |
| 0.5896 | 0.3263 | 700 | 0.5574 |
| 0.5999 | 0.3729 | 800 | 0.5505 |
| 0.5836 | 0.4195 | 900 | 0.5437 |
| 0.5865 | 0.4661 | 1000 | 0.5442 |
| 0.582 | 0.5127 | 1100 | 0.5462 |
| 0.5846 | 0.5593 | 1200 | 0.5409 |
| 0.5857 | 0.6059 | 1300 | 0.5478 |
| 0.5939 | 0.1631 | 1400 | 0.5501 |
| 0.5988 | 0.1748 | 1500 | 0.5496 |
| 0.6149 | 0.1864 | 1600 | 0.5535 |
| 0.6106 | 0.1981 | 1700 | 0.5487 |
| 0.6039 | 0.2097 | 1800 | 0.5449 |
| 0.5778 | 0.2214 | 1900 | 0.5441 |
| 0.583 | 0.2330 | 2000 | 0.5318 |
| 0.5762 | 0.2447 | 2100 | 0.5353 |
| 0.5849 | 0.2564 | 2200 | 0.5330 |
| 0.5772 | 0.2680 | 2300 | 0.5303 |
| 0.5926 | 0.2797 | 2400 | 0.5243 |
| 0.5749 | 0.2913 | 2500 | 0.5271 |
| 0.5711 | 0.3030 | 2600 | 0.5232 |
| 0.5536 | 0.1573 | 2700 | 0.5284 |
| 0.5669 | 0.1631 | 2800 | 0.5280 |
| 0.5744 | 0.1690 | 2900 | 0.5252 |
| 0.5806 | 0.1748 | 3000 | 0.5247 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.19.1
| crystalline7/130612 | crystalline7 | 2025-09-20T01:59:03Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T01:58:59Z | [View on Civ Archive](https://civarchive.com/models/153293?modelVersionId=171657) |
| seraphimzzzz/876799 | seraphimzzzz | 2025-09-20T01:58:55Z | 0 | 0 | null | ["region:us"] | null | 2025-08-04T18:19:04Z | [View on Civ Archive](https://civarchive.com/models/523270?modelVersionId=970337) |
| amethyst9/1004085 | amethyst9 | 2025-09-20T01:58:47Z | 0 | 0 | null | ["region:us"] | null | 2025-09-20T01:58:45Z | [View on Civ Archive](https://civarchive.com/models/832785?modelVersionId=1099064) |