modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.15_0.75_0.5_epoch1 | MinaMila | 2025-06-16T08:15:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T08:14:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
aieng-lab/starcoder2-3b_review-aspect | aieng-lab | 2025-06-16T08:13:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"starcoder2",
"text-classification",
"en",
"base_model:bigcode/starcoder2-3b",
"base_model:finetune:bigcode/starcoder2-3b",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T08:11:28Z | ---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- bigcode/starcoder2-3b
pipeline_tag: text-classification
---
# StarCoder2 3b for classifying API reviews
This model classifies API reviews in developer forums (e.g., Stack Overflow) as 'usability', 'others', 'onlysentiment', 'bug', 'performance', 'community', 'documentation', 'compatibility', 'legal', 'portability' or 'security'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [bigcode/starcoder2-3b](https://huggingface.co/bigcode/starcoder2-3b)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
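The card does not include a usage snippet; the sketch below is an assumption, not part of the original card — it presumes the checkpoint loads with the standard 🤗 transformers text-classification pipeline and that the label names match the list above.

```python
from transformers import pipeline

# Hypothetical usage sketch: classify a forum post into one of the aspect labels.
classifier = pipeline("text-classification", model="aieng-lab/starcoder2-3b_review-aspect")
print(classifier("The documentation for this API is outdated and hard to follow."))
# Expected shape: [{'label': '<one of the aspect labels above>', 'score': ...}]
```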
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.5_0.15_0.05_epoch2 | MinaMila | 2025-06-16T08:12:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T08:10:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.15_0.75_0.75_epoch2 | MinaMila | 2025-06-16T08:08:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T08:06:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kucher7serg/j1ane | kucher7serg | 2025-06-16T08:08:38Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-16T08:08:38Z | ---
license: apache-2.0
---
|
cswind/DeepRL-u2-taxi | cswind | 2025-06-16T08:08:08Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-16T08:08:04Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: DeepRL-u2-taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
# `load_from_hub` is the helper defined in the Deep RL course notebook (not a library import).
import gymnasium as gym  # use `import gym` instead on older setups

model = load_from_hub(repo_id="cswind/DeepRL-u2-taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
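From here, a greedy evaluation episode takes only a few lines. This is a sketch, assuming the pickled dict stores the Q-table under a `"qtable"` key (the usual layout in the Deep RL course) and the Gymnasium five-tuple step API:

```python
import numpy as np

state, info = env.reset()
done, total_reward = False, 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode reward:", total_reward)
```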
|
JayHyeon/Qwen_1.5B-math-VIPO_5e-7_10.0vpo_constant-1ep | JayHyeon | 2025-06-16T08:07:57Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:argilla/distilabel-math-preference-dpo",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-Math-1.5B",
"base_model:finetune:Qwen/Qwen2.5-Math-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T05:39:54Z | ---
base_model: Qwen/Qwen2.5-Math-1.5B
datasets: argilla/distilabel-math-preference-dpo
library_name: transformers
model_name: Qwen_1.5B-math-VIPO_5e-7_10.0vpo_constant-1ep
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Qwen_1.5B-math-VIPO_5e-7_10.0vpo_constant-1ep
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) on the [argilla/distilabel-math-preference-dpo](https://huggingface.co/datasets/argilla/distilabel-math-preference-dpo) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JayHyeon/Qwen_1.5B-math-VIPO_5e-7_10.0vpo_constant-1ep", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bonin147/huggingface/runs/n38x57zk)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.50.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.5_0.15_0.05_epoch1 | MinaMila | 2025-06-16T08:04:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T08:02:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Arsen008/wav2vec2-large-mms-1b-turkish-colab | Arsen008 | 2025-06-16T08:03:27Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-06-10T09:42:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
srivihari/resume-job-role-classifier | srivihari | 2025-06-16T08:03:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"resume",
"job-role-classification",
"en",
"dataset:custom",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-15T17:33:43Z | ---
license: apache-2.0
tags:
- text-classification
- resume
- job-role-classification
- transformers
- distilbert
datasets:
- custom
language:
- en
pipeline_tag: text-classification
---
# Resume Job Role Classifier
A fine-tuned [`DistilBERT`](https://huggingface.co/distilbert-base-uncased) model to classify job roles based on resume content. This model is trained to predict the most likely profession from the given resume text, supporting over 14 different job categories.
---
## Model Details
- **Architecture**: DistilBERT (base, uncased)
- **Task**: Multi-class text classification
- **Input**: Raw resume text (English)
- **Output**: Predicted job category label and score
---
## Labels Covered
This model supports classification into the following job categories:
- Data Science
- Java Developer
- Web Designing
- HR
- Mechanical Engineer
- Electrical Engineering
- Civil Engineer
- Arts
- Advocate
- Sales
- Health and fitness
- Business Analyst
- SAP Developer
- Automation Testing
---
## Training
- **Dataset**: Custom dataset containing labeled resumes
- **Split**: 80% train / 20% test
- **Metrics**:
- Accuracy: 99–100%
- F1 Score: ~0.99–1.00 (macro avg)
- **Epochs**: 3
- **Batch size**: 8
- **Optimizer**: AdamW
---
## How to Use
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="srivihari/resume-job-role-classifier")
result = classifier("Experienced data scientist with Python, machine learning, and statistics background.")
print(result)
# Example output:
# [{'label': 'Data Science', 'score': 0.97}]

# Note: if you face issues such as token_type_ids errors, adjust the tokenizer config as below:
# from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
# model = AutoModelForSequenceClassification.from_pretrained("srivihari/resume-job-role-classifier")
# tokenizer = AutoTokenizer.from_pretrained("srivihari/resume-job-role-classifier")
# tokenizer.model_input_names = ["input_ids", "attention_mask"]
# classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
```
|
SillyChilly25/medgemma-brain-cancer | SillyChilly25 | 2025-06-16T08:03:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-06-13T12:16:29Z | ---
base_model: google/medgemma-4b-it
library_name: transformers
model_name: medgemma-brain-cancer
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for medgemma-brain-cancer
This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="SillyChilly25/medgemma-brain-cancer", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/hitenhasija888-vellore-institute-of-technology/huggingface/runs/hzw9f5kl)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.15_0.75_0.75_epoch1 | MinaMila | 2025-06-16T08:02:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T08:00:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rmdhirr/suja-lorab-ep5-suja-7000 | rmdhirr | 2025-06-16T08:01:45Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:rmdhirr/merged-suja-latest",
"base_model:adapter:rmdhirr/merged-suja-latest",
"region:us"
] | null | 2025-06-16T08:00:41Z | ---
base_model: rmdhirr/merged-suja-latest
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
aieng-lab/codebert-base_review-aspect | aieng-lab | 2025-06-16T08:00:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"en",
"base_model:microsoft/codebert-base",
"base_model:finetune:microsoft/codebert-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T08:00:20Z | ---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- microsoft/codebert-base
pipeline_tag: text-classification
---
# CodeBERT base for classifying API reviews
This model classifies API reviews in developer forums (e.g., Stack Overflow) as 'usability', 'others', 'onlysentiment', 'bug', 'performance', 'community', 'documentation', 'compatibility', 'legal', 'portability' or 'security'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [microsoft/codebert-base](https://huggingface.co/microsoft/codebert-base)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
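As with the other models in this series, no usage snippet ships with the card; the sketch below (an illustration, not from the original card) loads the checkpoint explicitly to inspect per-aspect probabilities, using only the standard transformers API and the label mapping from the model config.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "aieng-lab/codebert-base_review-aspect"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("This library leaks memory under heavy load.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]
for idx, p in enumerate(probs):  # one probability per aspect label
    print(model.config.id2label[idx], round(float(p), 3))
```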
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
Nerva1228/liupopo | Nerva1228 | 2025-06-16T07:56:49Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-16T07:56:48Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: liupopo
---
# Liupopo
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `liupopo` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "liupopo",
"lora_weights": "https://huggingface.co/Nerva1228/liupopo/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Nerva1228/liupopo', weight_name='lora.safetensors')
image = pipeline('liupopo').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
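A common next step is fusing the LoRA into the base weights at a fixed strength before generation; here is a brief sketch using the diffusers fuse/unfuse API (the scale value is illustrative, not from the original card):

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Nerva1228/liupopo', weight_name='lora.safetensors')

# Fuse at 80% strength, generate, then restore the un-fused base weights.
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('liupopo').images[0]
pipeline.unfuse_lora()
```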
## Training details
- Steps: 2000
- Learning rate: 5e-05
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Nerva1228/liupopo/discussions) to add images that show off what you've made with this LoRA.
|
JayHyeon/Qwen_1.5B-math-VIPO_5e-7_3.0vpo_constant-1ep | JayHyeon | 2025-06-16T07:54:51Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:argilla/distilabel-math-preference-dpo",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-Math-1.5B",
"base_model:finetune:Qwen/Qwen2.5-Math-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T05:29:43Z | ---
base_model: Qwen/Qwen2.5-Math-1.5B
datasets: argilla/distilabel-math-preference-dpo
library_name: transformers
model_name: Qwen_1.5B-math-VIPO_5e-7_3.0vpo_constant-1ep
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Qwen_1.5B-math-VIPO_5e-7_3.0vpo_constant-1ep
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) on the [argilla/distilabel-math-preference-dpo](https://huggingface.co/datasets/argilla/distilabel-math-preference-dpo) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JayHyeon/Qwen_1.5B-math-VIPO_5e-7_3.0vpo_constant-1ep", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bonin147/huggingface/runs/0fnh12lt)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.50.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
aieng-lab/t5-large_review-aspect | aieng-lab | 2025-06-16T07:54:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text-classification",
"en",
"base_model:google-t5/t5-large",
"base_model:finetune:google-t5/t5-large",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T07:53:50Z | ---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- t5-large
pipeline_tag: text-classification
---
# T5 large for classifying API reviews
This model classifies API reviews in developer forums (e.g., Stack Overflow) as 'usability', 'others', 'onlysentiment', 'bug', 'performance', 'community', 'documentation', 'compatibility', 'legal', 'portability' or 'security'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [t5-large](https://huggingface.co/t5-large)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
aitechleart/Andrij | aitechleart | 2025-06-16T07:54:13Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-06-16T07:11:55Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
SidXXD/eps8-noise_upscaling-f_26 | SidXXD | 2025-06-16T07:53:35Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"custom-diffusion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2025-06-16T07:47:56Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: photo of a sks person
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---
# Custom Diffusion - SidXXD/eps8-noise_upscaling-f_26
These are Custom Diffusion adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on `photo of a sks person` using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images below.
For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
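A hedged loading sketch with diffusers (the weight file name below is the diffusers default for Custom Diffusion and is an assumption about this repository's contents):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Assumed default file name written by the Custom Diffusion training script.
pipe.unet.load_attn_procs(
    "SidXXD/eps8-noise_upscaling-f_26",
    weight_name="pytorch_custom_diffusion_weights.bin",
)
image = pipe("photo of a sks person", num_inference_steps=50).images[0]
image.save("sks-person.png")
```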
|
tscstudios/efphy0itvsbuqapuj0ui31j6gff3_24705bf4-0112-49d1-ab57-4a053ed9a09b | tscstudios | 2025-06-16T07:53:11Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-16T07:53:09Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Efphy0Itvsbuqapuj0Ui31J6Gff3_24705Bf4 0112 49D1 Ab57 4A053Ed9A09B
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/tscstudios/efphy0itvsbuqapuj0ui31j6gff3_24705bf4-0112-49d1-ab57-4a053ed9a09b/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('tscstudios/efphy0itvsbuqapuj0ui31j6gff3_24705bf4-0112-49d1-ab57-4a053ed9a09b', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/tscstudios/efphy0itvsbuqapuj0ui31j6gff3_24705bf4-0112-49d1-ab57-4a053ed9a09b/discussions) to add images that show off what you've made with this LoRA.
|
csukuangfj/vits-piper-de_DE-glados_turret-medium | csukuangfj | 2025-06-16T07:52:50Z | 0 | 0 | null | [
"onnx",
"region:us"
] | null | 2025-06-16T07:08:40Z | # Introduction
The models are from https://huggingface.co/systemofapwne/piper-de-glados/tree/main/de/de_DE |
Triangle104/Qwen3-4B-Esper3-Q4_K_S-GGUF | Triangle104 | 2025-06-16T07:52:45Z | 18 | 0 | transformers | [
"transformers",
"gguf",
"esper",
"esper-3",
"valiant",
"valiant-labs",
"qwen",
"qwen-3",
"qwen-3-4b",
"4b",
"reasoning",
"code",
"code-instruct",
"python",
"javascript",
"dev-ops",
"jenkins",
"terraform",
"scripting",
"powershell",
"azure",
"aws",
"gcp",
"cloud",
"problem-solving",
"architect",
"engineer",
"developer",
"creative",
"analytical",
"expert",
"rationality",
"conversational",
"chat",
"instruct",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"dataset:sequelbox/Titanium2.1-DeepSeek-R1",
"dataset:sequelbox/Tachibana2-DeepSeek-R1",
"dataset:sequelbox/Raiden-DeepSeek-R1",
"base_model:ValiantLabs/Qwen3-4B-Esper3",
"base_model:quantized:ValiantLabs/Qwen3-4B-Esper3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-30T23:24:05Z | ---
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- esper
- esper-3
- valiant
- valiant-labs
- qwen
- qwen-3
- qwen-3-4b
- 4b
- reasoning
- code
- code-instruct
- python
- javascript
- dev-ops
- jenkins
- terraform
- scripting
- powershell
- azure
- aws
- gcp
- cloud
- problem-solving
- architect
- engineer
- developer
- creative
- analytical
- expert
- rationality
- conversational
- chat
- instruct
- llama-cpp
- gguf-my-repo
base_model: ValiantLabs/Qwen3-4B-Esper3
datasets:
- sequelbox/Titanium2.1-DeepSeek-R1
- sequelbox/Tachibana2-DeepSeek-R1
- sequelbox/Raiden-DeepSeek-R1
license: apache-2.0
---
# Triangle104/Qwen3-4B-Esper3-Q4_K_S-GGUF
This model was converted to GGUF format from [`ValiantLabs/Qwen3-4B-Esper3`](https://huggingface.co/ValiantLabs/Qwen3-4B-Esper3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ValiantLabs/Qwen3-4B-Esper3) for more details on the model.
---
Esper 3 is a coding, architecture, and DevOps reasoning specialist built on Qwen 3.
- Finetuned on our DevOps, architecture, and code reasoning data generated with DeepSeek R1!
- Improved general and creative reasoning to supplement problem-solving and general chat performance.
- Small model sizes allow running on local desktops and mobile devices, plus super-fast server inference!
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen3-4B-Esper3-Q4_K_S-GGUF --hf-file qwen3-4b-esper3-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen3-4B-Esper3-Q4_K_S-GGUF --hf-file qwen3-4b-esper3-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen3-4B-Esper3-Q4_K_S-GGUF --hf-file qwen3-4b-esper3-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen3-4B-Esper3-Q4_K_S-GGUF --hf-file qwen3-4b-esper3-q4_k_s.gguf -c 2048
```
|
johngreendr1/f1141c6c-f51c-4e86-9dbd-a4549c81238c | johngreendr1 | 2025-06-16T07:52:40Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:adapter:codellama/CodeLlama-7b-hf",
"region:us"
] | null | 2025-06-16T03:05:12Z | ---
base_model: codellama/CodeLlama-7b-hf
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
aieng-lab/t5-small_review-aspect | aieng-lab | 2025-06-16T07:52:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text-classification",
"en",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T07:52:21Z | ---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- t5-small
pipeline_tag: text-classification
---
# T5 small for classifying API reviews
This model classifies API reviews in developer forums (e.g., Stack Overflow) as 'usability', 'others', 'onlysentiment', 'bug', 'performance', 'community', 'documentation', 'compatibility', 'legal', 'portability' or 'security'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [t5-small](https://huggingface.co/t5-small)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
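A minimal usage sketch (assuming the checkpoint loads with the standard `text-classification` pipeline):

```python
from transformers import pipeline

# Labels correspond to the review aspects listed above.
classifier = pipeline("text-classification", model="aieng-lab/t5-small_review-aspect")
print(classifier("Setting up authentication is confusing compared to the older SDK."))
```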
## Citation
```bibtex
@misc{pena2025benchmark,
  author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
Yasmineassia/RE-EnrichedPubMedBERT-finetuned-GAD | Yasmineassia | 2025-06-16T07:51:12Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"license:cc",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T05:24:44Z | ---
library_name: transformers
license: cc
pipeline_tag: text-classification
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
RE-EnrichedPubMedBERT fine-tuned for gene-disease relation extraction (GAD).

Finetuned from model: [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext)
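A hedged usage sketch (the `@GENE$`/`@DISEASE$` entity masks follow the common GAD preprocessing convention and are an assumption, not documented here):

```python
from transformers import pipeline

# Relation classification between a masked gene and disease mention.
clf = pipeline("text-classification", model="Yasmineassia/RE-EnrichedPubMedBERT-finetuned-GAD")
print(clf("The @GENE$ variant was significantly associated with @DISEASE$ risk."))
```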
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lendron555/ilyas | lendron555 | 2025-06-16T07:51:10Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-16T07:25:29Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ilyas
---
# Ilyas
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ilyas` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "ilyas",
"lora_weights": "https://huggingface.co/lendron555/ilyas/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('lendron555/ilyas', weight_name='lora.safetensors')
image = pipeline('ilyas').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/lendron555/ilyas/discussions) to add images that show off what you've made with this LoRA.
|
Triangle104/Qwen3-4B-Esper3-Q5_K_S-GGUF | Triangle104 | 2025-06-16T07:50:42Z | 25 | 0 | transformers | [
"transformers",
"gguf",
"esper",
"esper-3",
"valiant",
"valiant-labs",
"qwen",
"qwen-3",
"qwen-3-4b",
"4b",
"reasoning",
"code",
"code-instruct",
"python",
"javascript",
"dev-ops",
"jenkins",
"terraform",
"scripting",
"powershell",
"azure",
"aws",
"gcp",
"cloud",
"problem-solving",
"architect",
"engineer",
"developer",
"creative",
"analytical",
"expert",
"rationality",
"conversational",
"chat",
"instruct",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"dataset:sequelbox/Titanium2.1-DeepSeek-R1",
"dataset:sequelbox/Tachibana2-DeepSeek-R1",
"dataset:sequelbox/Raiden-DeepSeek-R1",
"base_model:ValiantLabs/Qwen3-4B-Esper3",
"base_model:quantized:ValiantLabs/Qwen3-4B-Esper3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-30T23:44:44Z | ---
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- esper
- esper-3
- valiant
- valiant-labs
- qwen
- qwen-3
- qwen-3-4b
- 4b
- reasoning
- code
- code-instruct
- python
- javascript
- dev-ops
- jenkins
- terraform
- scripting
- powershell
- azure
- aws
- gcp
- cloud
- problem-solving
- architect
- engineer
- developer
- creative
- analytical
- expert
- rationality
- conversational
- chat
- instruct
- llama-cpp
- gguf-my-repo
base_model: ValiantLabs/Qwen3-4B-Esper3
datasets:
- sequelbox/Titanium2.1-DeepSeek-R1
- sequelbox/Tachibana2-DeepSeek-R1
- sequelbox/Raiden-DeepSeek-R1
license: apache-2.0
---
# Triangle104/Qwen3-4B-Esper3-Q5_K_S-GGUF
This model was converted to GGUF format from [`ValiantLabs/Qwen3-4B-Esper3`](https://huggingface.co/ValiantLabs/Qwen3-4B-Esper3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ValiantLabs/Qwen3-4B-Esper3) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen3-4B-Esper3-Q5_K_S-GGUF --hf-file qwen3-4b-esper3-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen3-4B-Esper3-Q5_K_S-GGUF --hf-file qwen3-4b-esper3-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen3-4B-Esper3-Q5_K_S-GGUF --hf-file qwen3-4b-esper3-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen3-4B-Esper3-Q5_K_S-GGUF --hf-file qwen3-4b-esper3-q5_k_s.gguf -c 2048
```
|
veddhanth/lora-trained-xl-stage-2-pretrained-enc-enhanced-sneaker | veddhanth | 2025-06-16T07:50:21Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2025-06-16T07:38:58Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a photo of sks sneaker
widget: []
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - veddhanth/lora-trained-xl-stage-2-pretrained-enc-enhanced-sneaker
<Gallery />
## Model description
These are veddhanth/lora-trained-xl-stage-2-pretrained-enc-enhanced-sneaker LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of sks sneaker` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/veddhanth/lora-trained-xl-stage-2-pretrained-enc-enhanced-sneaker/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# Illustrative sketch (not an official snippet): load the LoRA on top of SDXL.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("veddhanth/lora-trained-xl-stage-2-pretrained-enc-enhanced-sneaker")
image = pipe("a photo of sks sneaker").images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.5_0.15_0.15_epoch1 | MinaMila | 2025-06-16T07:48:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T07:46:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.05_0.05_epoch1 | MinaMila | 2025-06-16T07:48:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T07:46:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Triangle104/Qwen3-4B-Esper3-Q4_K_M-GGUF | Triangle104 | 2025-06-16T07:48:08Z | 19 | 0 | transformers | [
"transformers",
"gguf",
"esper",
"esper-3",
"valiant",
"valiant-labs",
"qwen",
"qwen-3",
"qwen-3-4b",
"4b",
"reasoning",
"code",
"code-instruct",
"python",
"javascript",
"dev-ops",
"jenkins",
"terraform",
"scripting",
"powershell",
"azure",
"aws",
"gcp",
"cloud",
"problem-solving",
"architect",
"engineer",
"developer",
"creative",
"analytical",
"expert",
"rationality",
"conversational",
"chat",
"instruct",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"dataset:sequelbox/Titanium2.1-DeepSeek-R1",
"dataset:sequelbox/Tachibana2-DeepSeek-R1",
"dataset:sequelbox/Raiden-DeepSeek-R1",
"base_model:ValiantLabs/Qwen3-4B-Esper3",
"base_model:quantized:ValiantLabs/Qwen3-4B-Esper3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-30T23:35:07Z | ---
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- esper
- esper-3
- valiant
- valiant-labs
- qwen
- qwen-3
- qwen-3-4b
- 4b
- reasoning
- code
- code-instruct
- python
- javascript
- dev-ops
- jenkins
- terraform
- scripting
- powershell
- azure
- aws
- gcp
- cloud
- problem-solving
- architect
- engineer
- developer
- creative
- analytical
- expert
- rationality
- conversational
- chat
- instruct
- llama-cpp
- gguf-my-repo
base_model: ValiantLabs/Qwen3-4B-Esper3
datasets:
- sequelbox/Titanium2.1-DeepSeek-R1
- sequelbox/Tachibana2-DeepSeek-R1
- sequelbox/Raiden-DeepSeek-R1
license: apache-2.0
---
# Triangle104/Qwen3-4B-Esper3-Q4_K_M-GGUF
This model was converted to GGUF format from [`ValiantLabs/Qwen3-4B-Esper3`](https://huggingface.co/ValiantLabs/Qwen3-4B-Esper3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ValiantLabs/Qwen3-4B-Esper3) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen3-4B-Esper3-Q4_K_M-GGUF --hf-file qwen3-4b-esper3-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen3-4B-Esper3-Q4_K_M-GGUF --hf-file qwen3-4b-esper3-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen3-4B-Esper3-Q4_K_M-GGUF --hf-file qwen3-4b-esper3-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen3-4B-Esper3-Q4_K_M-GGUF --hf-file qwen3-4b-esper3-q4_k_m.gguf -c 2048
```
|
xiaoyuanliu/Qwen2.5-7B-Instruct-MathHard-PPO-RISE012 | xiaoyuanliu | 2025-06-16T07:47:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T07:40:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.5_0.15_0.25_epoch2 | MinaMila | 2025-06-16T07:40:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T07:38:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sungwoo1/roberta-base-klue-ynat-classification2 | sungwoo1 | 2025-06-16T07:39:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T07:39:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
silent666/task-10-microsoft-Phi-3.5-mini-instruct | silent666 | 2025-06-16T07:37:59Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:adapter:microsoft/Phi-3.5-mini-instruct",
"region:us"
] | null | 2025-06-16T01:03:11Z | ---
base_model: microsoft/Phi-3.5-mini-instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
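Since this card's metadata lists `microsoft/Phi-3.5-mini-instruct` as the base model and `peft` as the library, a minimal loading sketch might look as follows (an assumption based on the metadata above, not author-provided code):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach this repo's adapter on top (repo id taken from this card)
base = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3.5-mini-instruct")
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3.5-mini-instruct")
model = PeftModel.from_pretrained(base, "silent666/task-10-microsoft-Phi-3.5-mini-instruct")
```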
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
csukuangfj/vits-piper-de_DE-glados-medium | csukuangfj | 2025-06-16T07:36:48Z | 0 | 0 | null | [
"onnx",
"region:us"
] | null | 2025-06-16T07:07:38Z | # Introduction
The models are from
https://huggingface.co/systemofapwne/piper-de-glados/tree/main/de/de_DE |
csukuangfj/vits-piper-de_DE-glados-low | csukuangfj | 2025-06-16T07:36:27Z | 0 | 0 | null | [
"onnx",
"region:us"
] | null | 2025-06-16T07:07:24Z | # Introduction
The models are from
https://huggingface.co/systemofapwne/piper-de-glados/tree/main/de/de_DE |
aieng-lab/roberta-base_review-aspect | aieng-lab | 2025-06-16T07:35:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"en",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T07:35:29Z | ---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- roberta-base
pipeline_tag: text-classification
---
# RoBERTa base for classifying API reviews
This model classifies API reviews in developer forums (e.g., Stack Overflow) as 'usability', 'others', 'onlysentiment', 'bug', 'performance', 'community', 'documentation', 'compatibility', 'legal', 'portability' or 'security'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [roberta-base](https://huggingface.co/roberta-base)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
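A minimal usage sketch via the standard `text-classification` pipeline (assumed from the metadata above; the returned label should be one of the aspects listed in the description):
```python
from transformers import pipeline

# Load the fine-tuned classifier from the Hub
classifier = pipeline("text-classification", model="aieng-lab/roberta-base_review-aspect")

# Classify an API review from a developer forum
print(classifier("The documentation of this API is outdated and hard to follow."))
```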
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
aieng-lab/bert-base-cased_review-aspect | aieng-lab | 2025-06-16T07:34:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"en",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T07:34:07Z | ---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- bert-base-cased
pipeline_tag: text-classification
---
# BERT base for classifying API reviews
This model classifies API reviews in developer forums (e.g., Stack Overflow) as 'usability', 'others', 'onlysentiment', 'bug', 'performance', 'community', 'documentation', 'compatibility', 'legal', 'portability' or 'security'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [bert-base-cased](https://huggingface.co/bert-base-cased)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
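As with the companion RoBERTa model in this benchmark, the classifier can be loaded through the `text-classification` pipeline (a sketch based on this card's metadata):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="aieng-lab/bert-base-cased_review-aspect")
print(classifier("Setting up authentication crashes the client on every second call."))
```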
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
xfjcoder/llama3.1-8b-erged-6bit | xfjcoder | 2025-06-16T07:31:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T07:31:17Z | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** xfjcoder
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.05_0.25_epoch2 | MinaMila | 2025-06-16T07:28:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T07:26:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
JayHyeon/Qwen_1.5B-math-IPO_5e-7_1.0vpo_constant-1ep | JayHyeon | 2025-06-16T07:27:28Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:argilla/distilabel-math-preference-dpo",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-Math-1.5B",
"base_model:finetune:Qwen/Qwen2.5-Math-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-15T07:39:02Z | ---
base_model: Qwen/Qwen2.5-Math-1.5B
datasets: argilla/distilabel-math-preference-dpo
library_name: transformers
model_name: Qwen_1.5B-math-IPO_5e-7_1.0vpo_constant-1ep
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Qwen_1.5B-math-IPO_5e-7_1.0vpo_constant-1ep
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) on the [argilla/distilabel-math-preference-dpo](https://huggingface.co/datasets/argilla/distilabel-math-preference-dpo) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JayHyeon/Qwen_1.5B-math-IPO_5e-7_1.0vpo_constant-1ep", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bonin147/huggingface/runs/gwchm69z)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
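For reference, a run like this one might be configured with TRL's `DPOTrainer` roughly as follows. This is a sketch, not the author's script: the `loss_type="ipo"`, learning rate, and constant scheduler are assumptions inferred from the model name, and the dataset is assumed to already expose `prompt`/`chosen`/`rejected` columns.
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "Qwen/Qwen2.5-Math-1.5B"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

train_dataset = load_dataset("argilla/distilabel-math-preference-dpo", split="train")

config = DPOConfig(
    output_dir="Qwen_1.5B-math-IPO_5e-7_1.0vpo_constant-1ep",
    loss_type="ipo",               # assumption inferred from the model name
    learning_rate=5e-7,            # assumption inferred from the model name
    lr_scheduler_type="constant",  # assumption inferred from the model name
    num_train_epochs=1,
)
trainer = DPOTrainer(model=model, args=config, train_dataset=train_dataset, processing_class=tokenizer)
trainer.train()
```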
### Framework versions
- TRL: 0.15.2
- Transformers: 4.50.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
DLxiaoying/distilbert-base-uncased-finetuned-clinc | DLxiaoying | 2025-06-16T07:25:58Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T06:30:52Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8063
- Accuracy: 0.9161
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
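Expressed as 🤗 `TrainingArguments`, the hyperparameters above map roughly to the following (a reconstruction for readability, not the original training script):
```python
from transformers import TrainingArguments

# Reconstructed from the hyperparameters listed above
args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-clinc",
    learning_rate=2e-5,
    per_device_train_batch_size=48,
    per_device_eval_batch_size=48,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```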
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.3296 | 1.0 | 318 | 3.3392 | 0.7313 |
| 2.6885 | 2.0 | 636 | 1.9295 | 0.8465 |
| 1.6035 | 3.0 | 954 | 1.2026 | 0.8965 |
| 1.0561 | 4.0 | 1272 | 0.8956 | 0.9113 |
| 0.8334 | 5.0 | 1590 | 0.8063 | 0.9161 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
sallani/Urban_CO2_Predictor_Small_LMM_LoRA_GGUF | sallani | 2025-06-16T07:25:42Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"Co2Emission",
"Vanet",
"Urban mobility",
"Self",
"Driving",
"zero-shot-classification",
"fr",
"en",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:quantized:mistralai/Mistral-7B-Instruct-v0.3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | zero-shot-classification | 2025-06-15T19:55:18Z | ---
license: apache-2.0
language:
- fr
- en
metrics:
- accuracy
- brier_score
base_model:
- mistralai/Mistral-7B-Instruct-v0.3
pipeline_tag: zero-shot-classification
tags:
- Co2Emission
- Vanet
- Urban mobility
- Self
- Driving
library_name: transformers
---
# Urban\_CO2\_Predictor\_Small\_LMM\_LoRA\_GGUF
Author: **Dr. Sabri Allani**
AI & Cybersecurity Expert | R\&D Consultant | Instructor | Open-Source Contributor
Model type: GGUF (Urban\_CO2\_Predictor\_Small\_Edge\_LLM based on Mistral-7B Instruct, quantized)
Parameters: 7B
File size: \~3GB
License: MIT
---
## Overview
I am releasing this model as a GGUF-quantized version of Mistral-7B Instruct, prepared for research, experimentation, and further customization for urban CO₂ emission prediction and general LLM tasks.
This release is intended for anyone working on environmental analytics, LoRA fine-tuning, or practical LLM deployment with open-source tools.
The model is especially relevant for **CO₂ emission prediction in VANET (Vehicular Ad Hoc Networks) and urban mobility environments**. It can be prompted with contextual information to generate approximate CO₂ estimates based on traffic, vehicle type, density, and time of day.
* Base model: Mistral-7B Instruct
* Format: GGUF (compatible with llama.cpp, llama-cpp-python, LM Studio, ollama, KoboldCpp, etc.)
* Quantization: \[specify, e.g. Q4\_0, Q5\_K, Q6\_K, etc., if known]
* Size: \~3GB
---
📥 **Download model file**:
[➡️ Click here to download `Urban_CO2_Predictor_Small_Edge_LLM.gguf`](https://huggingface.co/sallani/Urban_CO2_Predictor_Small_LMM_LoRA_GGUF/resolve/main/Urban_CO2_Predictor_Small_Edge_LLM.gguf)
(~3.08 GB GGUF quantized model for edge inference and urban CO₂ prediction)
## Intended Use
* Urban and environmental CO₂ emission analysis and prediction (baseline, demo, or experimental)
* LoRA and transfer learning for domain adaptation
* General language tasks on CPU/GPU (local, edge, or cloud inference)
* Research, prototyping, or educational work
* Prediction in VANETs and real-time urban mobility systems
---
## Example usage
```python
from llama_cpp import Llama

# Load the quantized GGUF model
llm = Llama(model_path="co2_merged.gguf")

# Provide explicit parameters in the prompt
prompt = (
    "Estimate CO₂ emissions (g/km) for the following scenario:\n"
    "- Location: Central Paris urban boulevard\n"
    "- Timestamp: 07:45, weekday (rush hour)\n"
    "- Weather: Cloudy, 18 °C\n"
    "- Traffic density: 60 vehicles per km\n"
    "- Vehicle mix: 70 % diesel Euro 6, 20 % petrol Euro 5, 10 % electric\n"
    "- Average speed: 28 km/h\n"
    "Return only the numeric estimate followed by 'g/km'."
)

# Generate a short completion and print only the generated text
response = llm(prompt, max_tokens=32)
print(response["choices"][0]["text"])
```
---
## Technical details
* Architecture: Mistral 7B, instruct-tuned
* File format: GGUF
* File size: \~3GB
* License: MIT
---
## Citation
If you use this model in your work, please cite:
```bibtex
@misc{allani2024urban,
author = {Dr. Sabri Allani},
title = {Urban CO₂ Predictor Small LMM LoRA GGUF},
year = {2024},
howpublished = {\url{https://huggingface.co/sallani/Urban_CO2_Predictor_Small_LMM_LoRA_GGUF}},
note = {ORCID: https://orcid.org/0000-0003-0643-5067}
}
```
---
## Contact
For questions or collaboration, contact me via:
* [LinkedIn](https://www.linkedin.com/in/sabri-allani)
* [Hugging Face Profile](https://huggingface.co/sallani)
---
## Disclaimer
This model is provided as-is for research and development.
I make no warranty for production use or accuracy in real-world CO₂ prediction tasks.
Use at your own risk and adapt for your own projects as needed.
---
## License
MIT โ free to use, modify, and redistribute.
---
### Acknowledgment
This file is derived from Mistral-7B Instruct and adapted to GGUF format for open research in AI and environmental modeling. |
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.5_0.15_0.5_epoch2 | MinaMila | 2025-06-16T07:24:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T07:22:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Timia123/hint_24k_550 | Timia123 | 2025-06-16T07:23:44Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-16T07:14:15Z | ---
license: apache-2.0
---
|
Triangle104/Q3-8B-Kintsugi-Q5_K_M-GGUF | Triangle104 | 2025-06-16T07:22:14Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"axolotl",
"unsloth",
"roleplay",
"conversational",
"llama-cpp",
"gguf-my-repo",
"dataset:PygmalionAI/PIPPA",
"dataset:Alfitaria/nemotron-ultra-reasoning-synthkink",
"dataset:PocketDoc/Dans-Prosemaxx-Gutenberg",
"dataset:FreedomIntelligence/Medical-R1-Distill-Data",
"dataset:cognitivecomputations/SystemChat-2.0",
"dataset:allenai/tulu-3-sft-personas-instruction-following",
"dataset:kalomaze/Opus_Instruct_25k",
"dataset:simplescaling/s1K-claude-3-7-sonnet",
"dataset:ai2-adapt-dev/flan_v2_converted",
"dataset:grimulkan/theory-of-mind",
"dataset:grimulkan/physical-reasoning",
"dataset:nvidia/HelpSteer3",
"dataset:nbeerbower/gutenberg2-dpo",
"dataset:nbeerbower/gutenberg-moderne-dpo",
"dataset:nbeerbower/Purpura-DPO",
"dataset:antiven0m/physical-reasoning-dpo",
"dataset:allenai/tulu-3-IF-augmented-on-policy-70b",
"dataset:NobodyExistsOnTheInternet/system-message-DPO",
"base_model:allura-org/Q3-8B-Kintsugi",
"base_model:quantized:allura-org/Q3-8B-Kintsugi",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T07:18:55Z | ---
license: apache-2.0
base_model: allura-org/Q3-8B-Kintsugi
library_name: transformers
tags:
- mergekit
- axolotl
- unsloth
- roleplay
- conversational
- llama-cpp
- gguf-my-repo
datasets:
- PygmalionAI/PIPPA
- Alfitaria/nemotron-ultra-reasoning-synthkink
- PocketDoc/Dans-Prosemaxx-Gutenberg
- FreedomIntelligence/Medical-R1-Distill-Data
- cognitivecomputations/SystemChat-2.0
- allenai/tulu-3-sft-personas-instruction-following
- kalomaze/Opus_Instruct_25k
- simplescaling/s1K-claude-3-7-sonnet
- ai2-adapt-dev/flan_v2_converted
- grimulkan/theory-of-mind
- grimulkan/physical-reasoning
- nvidia/HelpSteer3
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
- nbeerbower/Purpura-DPO
- antiven0m/physical-reasoning-dpo
- allenai/tulu-3-IF-augmented-on-policy-70b
- NobodyExistsOnTheInternet/system-message-DPO
---
# Triangle104/Q3-8B-Kintsugi-Q5_K_M-GGUF
This model was converted to GGUF format from [`allura-org/Q3-8B-Kintsugi`](https://huggingface.co/allura-org/Q3-8B-Kintsugi) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/allura-org/Q3-8B-Kintsugi) for more details on the model.
---
Q3-8B-Kintsugi is a roleplaying model finetuned from Qwen3-8B-Base.
During testing, Kintsugi punched well above its weight class in terms of parameters, especially for 1-on-1 roleplaying and general storywriting.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Q3-8B-Kintsugi-Q5_K_M-GGUF --hf-file q3-8b-kintsugi-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Q3-8B-Kintsugi-Q5_K_M-GGUF --hf-file q3-8b-kintsugi-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Q3-8B-Kintsugi-Q5_K_M-GGUF --hf-file q3-8b-kintsugi-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Q3-8B-Kintsugi-Q5_K_M-GGUF --hf-file q3-8b-kintsugi-q5_k_m.gguf -c 2048
```
|
Antonio06188/Stage | Antonio06188 | 2025-06-16T07:17:47Z | 0 | 0 | null | [
"region:us"
] | null | 2025-03-10T11:55:30Z | ## Pour utiliser les jupyter notebook et pouvoir reproduire les rรฉsultats
- ### Gรฉnรฉrer les diffรฉrents datasets : Voir le __[README.md](./Dataset/README.md)__
- ### Installer le models Vosks : Voir le __[README](./Models/README.md)__
- ### Install pydup for python to manipulate audio
Need to install the dependencies ffmpeg :
Installing FFmpeg on Windows:
- Step 1: Download FFmpeg
Go to the official FFmpeg website: https://ffmpeg.org/download.html
Click on the Windows icon, then select the Windows builds by BtbN link.
Download the latest version from the provided links, e.g., ffmpeg-git-full.7z.
- Step 2: Extract the Files
Extract the downloaded .tar.xz file using a tool like 7-Zip or WinRAR.
After extraction, navigate to the folder that contains the extracted files.
- Step 3: Add FFmpeg to the PATH
Open the folder that contains the extracted FFmpeg files.
Copy the path of the bin folder (e.g., C:\ffmpeg\bin).
Open the Start Menu and search for Environment Variables.
In the System Properties window, click Environment Variables.
Under System Variables, find the Path variable, then click Edit.
In the Edit Environment Variable window, click New and paste the path to the bin folder you copied earlier.
Click OK to apply the changes.
- Step 4: Verify Installation
Open a new Command Prompt and type:
ffmpeg -version
You should see information about the installed FFmpeg version.
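Once FFmpeg is on the PATH, a quick sanity check that pydub can use it (the file names here are placeholders):
```python
from pydub import AudioSegment

# Load a WAV file and export it as MP3; this requires FFmpeg on the PATH
audio = AudioSegment.from_wav("example.wav")
audio.export("example.mp3", format="mp3")
```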
- ### Follow the notebooks in order:
- compareASR:
Some folders need to be created: uncomment the mkdirs lines in the cleanFolders functions to create them, then comment them out again.
For the evaluation, there are some wav2vec2 models you will not have yet; run suiteCompareASR to obtain them.
- suiteCompareASR:
The first part analyzes the evaluation results from compareASR, and the second part trains some wav2vec2 models for French, which are then evaluated in compareASR.
- kenLMPrepare:
Prepares the language models for the decoders of the ASR models, generates the corpora in the datasets, lists the commands to run on Linux to train the KenLM models, and evaluates the performance of the different KenLM models.
Triangle104/Q3-8B-Kintsugi-Q5_K_S-GGUF | Triangle104 | 2025-06-16T07:17:04Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"axolotl",
"unsloth",
"roleplay",
"conversational",
"llama-cpp",
"gguf-my-repo",
"dataset:PygmalionAI/PIPPA",
"dataset:Alfitaria/nemotron-ultra-reasoning-synthkink",
"dataset:PocketDoc/Dans-Prosemaxx-Gutenberg",
"dataset:FreedomIntelligence/Medical-R1-Distill-Data",
"dataset:cognitivecomputations/SystemChat-2.0",
"dataset:allenai/tulu-3-sft-personas-instruction-following",
"dataset:kalomaze/Opus_Instruct_25k",
"dataset:simplescaling/s1K-claude-3-7-sonnet",
"dataset:ai2-adapt-dev/flan_v2_converted",
"dataset:grimulkan/theory-of-mind",
"dataset:grimulkan/physical-reasoning",
"dataset:nvidia/HelpSteer3",
"dataset:nbeerbower/gutenberg2-dpo",
"dataset:nbeerbower/gutenberg-moderne-dpo",
"dataset:nbeerbower/Purpura-DPO",
"dataset:antiven0m/physical-reasoning-dpo",
"dataset:allenai/tulu-3-IF-augmented-on-policy-70b",
"dataset:NobodyExistsOnTheInternet/system-message-DPO",
"base_model:allura-org/Q3-8B-Kintsugi",
"base_model:quantized:allura-org/Q3-8B-Kintsugi",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T07:15:07Z | ---
license: apache-2.0
base_model: allura-org/Q3-8B-Kintsugi
library_name: transformers
tags:
- mergekit
- axolotl
- unsloth
- roleplay
- conversational
- llama-cpp
- gguf-my-repo
datasets:
- PygmalionAI/PIPPA
- Alfitaria/nemotron-ultra-reasoning-synthkink
- PocketDoc/Dans-Prosemaxx-Gutenberg
- FreedomIntelligence/Medical-R1-Distill-Data
- cognitivecomputations/SystemChat-2.0
- allenai/tulu-3-sft-personas-instruction-following
- kalomaze/Opus_Instruct_25k
- simplescaling/s1K-claude-3-7-sonnet
- ai2-adapt-dev/flan_v2_converted
- grimulkan/theory-of-mind
- grimulkan/physical-reasoning
- nvidia/HelpSteer3
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
- nbeerbower/Purpura-DPO
- antiven0m/physical-reasoning-dpo
- allenai/tulu-3-IF-augmented-on-policy-70b
- NobodyExistsOnTheInternet/system-message-DPO
---
# Triangle104/Q3-8B-Kintsugi-Q5_K_S-GGUF
This model was converted to GGUF format from [`allura-org/Q3-8B-Kintsugi`](https://huggingface.co/allura-org/Q3-8B-Kintsugi) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/allura-org/Q3-8B-Kintsugi) for more details on the model.
---
Q3-8B-Kintsugi is a roleplaying model finetuned from Qwen3-8B-Base.
During testing, Kintsugi punched well above its weight class in terms of parameters, especially for 1-on-1 roleplaying and general storywriting.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Q3-8B-Kintsugi-Q5_K_S-GGUF --hf-file q3-8b-kintsugi-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Q3-8B-Kintsugi-Q5_K_S-GGUF --hf-file q3-8b-kintsugi-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Q3-8B-Kintsugi-Q5_K_S-GGUF --hf-file q3-8b-kintsugi-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Q3-8B-Kintsugi-Q5_K_S-GGUF --hf-file q3-8b-kintsugi-q5_k_s.gguf -c 2048
```
|
allura-org/Q3-8B-Kintsugi | allura-org | 2025-06-16T07:16:59Z | 8 | 3 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"mergekit",
"axolotl",
"unsloth",
"roleplay",
"conversational",
"dataset:PygmalionAI/PIPPA",
"dataset:Alfitaria/nemotron-ultra-reasoning-synthkink",
"dataset:PocketDoc/Dans-Prosemaxx-Gutenberg",
"dataset:FreedomIntelligence/Medical-R1-Distill-Data",
"dataset:cognitivecomputations/SystemChat-2.0",
"dataset:allenai/tulu-3-sft-personas-instruction-following",
"dataset:kalomaze/Opus_Instruct_25k",
"dataset:simplescaling/s1K-claude-3-7-sonnet",
"dataset:ai2-adapt-dev/flan_v2_converted",
"dataset:grimulkan/theory-of-mind",
"dataset:grimulkan/physical-reasoning",
"dataset:nvidia/HelpSteer3",
"dataset:nbeerbower/gutenberg2-dpo",
"dataset:nbeerbower/gutenberg-moderne-dpo",
"dataset:nbeerbower/Purpura-DPO",
"dataset:antiven0m/physical-reasoning-dpo",
"dataset:allenai/tulu-3-IF-augmented-on-policy-70b",
"dataset:NobodyExistsOnTheInternet/system-message-DPO",
"base_model:Qwen/Qwen3-8B-Base",
"base_model:finetune:Qwen/Qwen3-8B-Base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-13T21:25:09Z | ---
license: apache-2.0
base_model: Qwen/Qwen3-8B-Base
library_name: transformers
tags:
- mergekit
- axolotl
- unsloth
- roleplay
- conversational
datasets:
- PygmalionAI/PIPPA
- Alfitaria/nemotron-ultra-reasoning-synthkink
- PocketDoc/Dans-Prosemaxx-Gutenberg
- FreedomIntelligence/Medical-R1-Distill-Data
- cognitivecomputations/SystemChat-2.0
- allenai/tulu-3-sft-personas-instruction-following
- kalomaze/Opus_Instruct_25k
- simplescaling/s1K-claude-3-7-sonnet
- ai2-adapt-dev/flan_v2_converted
- grimulkan/theory-of-mind
- grimulkan/physical-reasoning
- nvidia/HelpSteer3
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
- nbeerbower/Purpura-DPO
- antiven0m/physical-reasoning-dpo
- allenai/tulu-3-IF-augmented-on-policy-70b
- NobodyExistsOnTheInternet/system-message-DPO
---
# Q3-8B-Kintsugi

<small><i>get it? because kintsugi sounds like kitsune? hahaha-</i></small>
# Overview
***Q3-8B-Kintsugi*** is a roleplaying model finetuned from [Qwen3-8B-Base](https://huggingface.co/Qwen/Qwen3-8B-Base).
During testing, Kintsugi punched well above its weight class in terms of parameters, especially for 1-on-1 roleplaying and general storywriting.
# Quantizations
EXL3:
- [Official EXL3 quant repo](https://huggingface.co/allura-quants/allura-org_Q3-8B-Kintsugi-EXL3)
GGUF:
- [Official static GGUF quants](https://huggingface.co/allura-quants/allura-org_Q3-8B-Kintsugi-GGUF)
MLX:
- [8, 6, and 4bpw MLX-format quants by soundTeam](https://huggingface.co/collections/allura-quants/q3-8b-kintsugi-mlx-684fc48444f1214749f538c4)
# Usage
- Format is plain-old ChatML (please note that, unlike regular Qwen 3, you do *not* need to prefill empty think tags for it not to reason -- see below); a minimal prompt skeleton is shown after this list.
- Settings used by testers varied, but we generally stayed around 0.9 temperature and 0.1 min p. Do *not* use repetition penalties (DRY included). They break it.
- Any system prompt can likely be used, but I used the Shingame system prompt (link will be added later i promise)
- The official instruction-following version of Qwen3-8B was not used as a base. Instruction following was trained in post hoc, and "thinking" traces were not included. __As a result, "thinking" will not function.__
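A minimal ChatML prompt skeleton for reference (standard ChatML; the placeholder text is yours to fill in):
```
<|im_start|>system
{system prompt}<|im_end|>
<|im_start|>user
{user message}<|im_end|>
<|im_start|>assistant
```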
# Training Process
1. The [base model](https://huggingface.co/Qwen/Qwen3-8B-Base) first went through a supervised finetune on a corpus of instruction following data, roleplay conversations, and human writing based on the [Ink](https://huggingface.co/collections/allura-org/ink-6772fd1442308781594bbabb)/[Bigger Body](https://huggingface.co/collections/allura-org/bigger-body-67b277af0861cec33b54745d)/[Remnant](https://huggingface.co/collections/allura-org/remnant-6817c2113bbb2aed501513d0) lineage.
2. Finally, a KTO reinforcement learning phase steered the model away from the very purple prose the initial merge had, and improved its logical+spatial reasoning and sense of overall "intelligence".
Both stages here are very similar to [Q3-30B-A3B-Designant](https://huggingface.co/allura-org/Q3-30B-A3B-Designant), which went through a very similar process with the same data.
# Credits
- Fizz - Training, Data Wrangling
- Toaster, Mango, Bot, probably others I forgot ;-; - Testing
- inflatebot - original Designant model card that this one was yoinked from
- Artus - Funding
- Alibaba - Making the original model
- Axolotl, Unsloth, Huggingface - Making the frameworks used to train this model (Axolotl was used for the SFT process, and Unsloth+TRL was used for the KTO process)
- All quanters, inside and outside the org, specifically Artus, Lyra, and soundTeam/Heni
We would like to thank the Allura community on Discord, especially Curse, Heni, Artus and Mawnipulator, for their companionship and moral support. You all mean the world to us <3 |
Wilbur1240/ppo-pyramid | Wilbur1240 | 2025-06-16T07:16:43Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2025-06-16T07:16:31Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
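For example, assuming the Pyramids config from the Deep RL Course (the config path and run id are assumptions, not taken from this training run):
```bash
mlagents-learn ./config/ppo/PyramidsRND.yaml --run-id=Pyramids1 --resume
```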
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Wilbur1240/ppo-pyramid
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Triangle104/Q3-8B-Kintsugi-Q4_K_M-GGUF | Triangle104 | 2025-06-16T07:15:51Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"axolotl",
"unsloth",
"roleplay",
"conversational",
"llama-cpp",
"gguf-my-repo",
"dataset:PygmalionAI/PIPPA",
"dataset:Alfitaria/nemotron-ultra-reasoning-synthkink",
"dataset:PocketDoc/Dans-Prosemaxx-Gutenberg",
"dataset:FreedomIntelligence/Medical-R1-Distill-Data",
"dataset:cognitivecomputations/SystemChat-2.0",
"dataset:allenai/tulu-3-sft-personas-instruction-following",
"dataset:kalomaze/Opus_Instruct_25k",
"dataset:simplescaling/s1K-claude-3-7-sonnet",
"dataset:ai2-adapt-dev/flan_v2_converted",
"dataset:grimulkan/theory-of-mind",
"dataset:grimulkan/physical-reasoning",
"dataset:nvidia/HelpSteer3",
"dataset:nbeerbower/gutenberg2-dpo",
"dataset:nbeerbower/gutenberg-moderne-dpo",
"dataset:nbeerbower/Purpura-DPO",
"dataset:antiven0m/physical-reasoning-dpo",
"dataset:allenai/tulu-3-IF-augmented-on-policy-70b",
"dataset:NobodyExistsOnTheInternet/system-message-DPO",
"base_model:allura-org/Q3-8B-Kintsugi",
"base_model:quantized:allura-org/Q3-8B-Kintsugi",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T07:12:01Z | ---
license: apache-2.0
base_model: allura-org/Q3-8B-Kintsugi
library_name: transformers
tags:
- mergekit
- axolotl
- unsloth
- roleplay
- conversational
- llama-cpp
- gguf-my-repo
datasets:
- PygmalionAI/PIPPA
- Alfitaria/nemotron-ultra-reasoning-synthkink
- PocketDoc/Dans-Prosemaxx-Gutenberg
- FreedomIntelligence/Medical-R1-Distill-Data
- cognitivecomputations/SystemChat-2.0
- allenai/tulu-3-sft-personas-instruction-following
- kalomaze/Opus_Instruct_25k
- simplescaling/s1K-claude-3-7-sonnet
- ai2-adapt-dev/flan_v2_converted
- grimulkan/theory-of-mind
- grimulkan/physical-reasoning
- nvidia/HelpSteer3
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
- nbeerbower/Purpura-DPO
- antiven0m/physical-reasoning-dpo
- allenai/tulu-3-IF-augmented-on-policy-70b
- NobodyExistsOnTheInternet/system-message-DPO
---
# Triangle104/Q3-8B-Kintsugi-Q4_K_M-GGUF
This model was converted to GGUF format from [`allura-org/Q3-8B-Kintsugi`](https://huggingface.co/allura-org/Q3-8B-Kintsugi) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/allura-org/Q3-8B-Kintsugi) for more details on the model.
---
Q3-8B-Kintsugi is a roleplaying model finetuned from Qwen3-8B-Base.
During testing, Kintsugi punched well above its weight class in terms of parameters, especially for 1-on-1 roleplaying and general storywriting.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Q3-8B-Kintsugi-Q4_K_M-GGUF --hf-file q3-8b-kintsugi-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Q3-8B-Kintsugi-Q4_K_M-GGUF --hf-file q3-8b-kintsugi-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Q3-8B-Kintsugi-Q4_K_M-GGUF --hf-file q3-8b-kintsugi-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Q3-8B-Kintsugi-Q4_K_M-GGUF --hf-file q3-8b-kintsugi-q4_k_m.gguf -c 2048
```
|
Sumail/Eurus9 | Sumail | 2025-06-16T07:15:20Z | 24 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-08-22T09:24:47Z | ---
base_model:
- itorgov/model-1723976476
- itorgov/model-1723975614
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [itorgov/model-1723976476](https://huggingface.co/itorgov/model-1723976476)
* [itorgov/model-1723975614](https://huggingface.co/itorgov/model-1723975614)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: itorgov/model-1723975614
layer_range: [0, 48]
- model: itorgov/model-1723976476
layer_range: [0, 48]
merge_method: slerp
base_model: itorgov/model-1723975614
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
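To reproduce a merge from a config like this one, the YAML can be passed to mergekit's CLI (a sketch; the paths are placeholders):
```bash
pip install mergekit
mergekit-yaml config.yaml ./merged-model
```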
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.05_0.5_epoch2 | MinaMila | 2025-06-16T07:14:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T07:12:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fsa32442/cl-tohoku-bert-base-japanese-v3-ner-wikipedia-dataset | fsa32442 | 2025-06-16T07:13:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-06-16T07:12:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
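Pending author-provided code, a minimal sketch using the standard token-classification pipeline (the example sentence is illustrative; the cl-tohoku base tokenizer additionally needs `fugashi` and `unidic-lite` installed):

```python
from transformers import pipeline

# Japanese NER with the fine-tuned checkpoint; sub-word tokens are merged
# into entity spans by the aggregation strategy.
ner = pipeline(
    "token-classification",
    model="fsa32442/cl-tohoku-bert-base-japanese-v3-ner-wikipedia-dataset",
    aggregation_strategy="simple",
)
print(ner("夏目漱石は1867年に江戸で生まれた。"))
```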
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
katiecloudda/tuned-emoji | katiecloudda | 2025-06-16T07:11:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T07:10:29Z | ---
base_model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** katiecloudda
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-1b-it-unsloth-bnb-4bit
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
phospho-app/jmota27-ACT_BBOX-boats_datasets-u50na | phospho-app | 2025-06-16T07:10:27Z | 0 | 0 | null | [
"phosphobot",
"act",
"region:us"
] | null | 2025-06-16T07:05:08Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
The object 'black boat' was detected in 0 episodes in secondary_0 camera (should be: 10 episodes min). This is not enough to train a model. Check your dataset: https://lerobot-visualize-dataset.hf.space/jmota27/boats_datasets/ and rephrase the instruction.
```
## Training parameters:
- **Dataset**: [jmota27/boats_datasets](https://huggingface.co/datasets/jmota27/boats_datasets)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
**Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
**Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.5_0.15_0.75_epoch2 | MinaMila | 2025-06-16T07:08:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T07:06:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
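Until the authors add official instructions, a hedged sketch with the 🤗 Transformers text-generation pipeline (prompt and generation settings are illustrative):

```python
from transformers import pipeline

# Chat-style generation; recent transformers versions apply the chat
# template automatically when given a list of messages.
generator = pipeline(
    "text-generation",
    model="MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.5_0.15_0.75_epoch2",
    device_map="auto",
)
messages = [{"role": "user", "content": "Explain machine unlearning in one sentence."}]
print(generator(messages, max_new_tokens=64)[0]["generated_text"])
```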
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AlicanKiraz0/SenecaLLM_x_gemma27b-v2 | AlicanKiraz0 | 2025-06-16T07:08:19Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"gemma3",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-06-16T07:03:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
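No usage code is provided; a sketch assuming a recent transformers release that ships the `image-text-to-text` pipeline (image URL and prompt are placeholders):

```python
from transformers import pipeline

# Multimodal chat: the pipeline fetches the image and applies the
# processor's chat template before generation.
pipe = pipeline(
    "image-text-to-text",
    model="AlicanKiraz0/SenecaLLM_x_gemma27b-v2",
    device_map="auto",
)
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/sample.jpg"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
print(pipe(text=messages, max_new_tokens=64))
```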
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LarryAIDraw/ChamSkirkPonyXL | LarryAIDraw | 2025-06-16T07:06:03Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-16T06:23:48Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/503333/skirk-or-genshin-impact-or-pony-xl |
LarryAIDraw/Skirk_v2.0_pony-000034 | LarryAIDraw | 2025-06-16T07:05:53Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-16T06:23:24Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/598575/genshin-impactskirkpony |
namdp-ptit/LLamaRE-8B-Instruct-ZeroShot | namdp-ptit | 2025-06-16T07:05:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"transformer",
"classification",
"token-classification",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-06-16T06:09:47Z | ---
license: apache-2.0
language:
- en
base_model:
- unsloth/Meta-Llama-3.1-8B-Instruct
pipeline_tag: token-classification
library_name: transformers
tags:
- transformer
- classification
---
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained('namdp-ptit/LLamaRE-8B-Instruct-ZeroShot')
model = AutoModelForCausalLM.from_pretrained(
'namdp-ptit/LLamaRE-8B-Instruct-ZeroShot',
torch_dtype="auto",
device_map="cuda",
)
if tokenizer.pad_token is None:
tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = model.config.eos_token_id
user_prompt = """
Extract relationships between entities in text **strictly using ONLY the provided Relationship List** below and **MUST** strictly adhere to the output format.
Format each relationship as '<relation_type>: <head_entity>, <tail_entity>' and separated multiple relationship by '|'. Return 'None' if no relationships are identified.
Relationship List: {re_labels}
Text: {text}
"""
query = 'An art exhibit at the Hakawati Theatre in Arab east Jerusalem was a series of portraits of Palestinians killed in the rebellion.'
re_labels = ["Organization based in", "Located in", "Live in", "Work for", "Kill"]
user_prompt = user_prompt.format(re_labels=re_labels, text=query)  # note: use the re_labels list defined above
messages = [
{
"role": "system",
"content": "You are an expert in Relation Extraction (RE) task."
},
{
"role": "user",
"content": user_prompt
}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer(text, return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512,
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response) # Organization based in: Hakawati Theatre, Jerusalem
```
## Contact
**Email**: [email protected]
**LinkedIn**: [Dang Phuong Nam](https://www.linkedin.com/in/dang-phuong-nam-157912288/)
**Facebook**: [Phương Nam](https://www.facebook.com/phuong.namdang.7146557)
## Support The Project
If you find this project helpful and wish to support its ongoing development, here are some ways you can contribute:
1. **Star the Repository**: Show your appreciation by starring the repository. Your support motivates further
development
and enhancements.
2. **Contribute**: We welcome your contributions! You can help by reporting bugs, submitting pull requests, or
suggesting new features.
3. **Donate**: If you'd like to support financially, consider making a donation. You can donate through:
- Vietcombank: 9912692172 - DANG PHUONG NAM
Thank you for your support!
## Citation
Please cite as
```Plaintext
@misc{LlamaRE-8B-Instruct-ZeroShot,
title={LlamaRE: An Large Language Model for Relation Extraction},
author={Nam Dang Phuong},
year={2025},
publisher={Huggingface},
}
``` |
LarryAIDraw/skirk_genshinPDXL_scarxzys | LarryAIDraw | 2025-06-16T07:04:43Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-16T06:23:01Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/1062378/pony-skirk-or-genshin-impact |
Wilbur1240/ppo-SnowballTarget | Wilbur1240 | 2025-06-16T07:03:43Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2025-06-16T07:03:34Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
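If you don't have the checkpoint locally yet, you can pull it from the Hub first (a sketch assuming the Hugging Face ML-Agents integration is installed):

```bash
mlagents-load-from-hf --repo-id="Wilbur1240/ppo-SnowballTarget" --local-dir="./downloads"
```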
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Wilbur1240/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.05_0.75_epoch2 | MinaMila | 2025-06-16T07:00:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T06:59:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
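In the meantime, a minimal hedged sketch; `trust_remote_code=True` is required because the repo ships custom Phi-3 code (the prompt is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.05_0.75_epoch2"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, device_map="auto"
)

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```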
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.5_0.15_0.75_epoch1 | MinaMila | 2025-06-16T07:00:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T06:58:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
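A hedged alternative to the missing quick-start, applying the tokenizer's chat template manually (prompt and dtype are illustrative choices):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.5_0.15_0.75_epoch1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Write one sentence about the sea."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(input_ids, max_new_tokens=48)[0], skip_special_tokens=True))
```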
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Baron-qui/distilhubert-finetuned-gtzan | Baron-qui | 2025-06-16T06:59:19Z | 1 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2025-06-14T00:12:46Z | ---
library_name: transformers
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.84
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5618
- Accuracy: 0.84
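For quick inference (not part of the original card; the audio file path is a placeholder), a sketch with the audio-classification pipeline:

```python
from transformers import pipeline

# Predict the music genre of a clip with the fine-tuned checkpoint.
classifier = pipeline(
    "audio-classification",
    model="Baron-qui/distilhubert-finetuned-gtzan",
)
print(classifier("song.wav"))  # e.g. [{"label": "blues", "score": 0.91}, ...]
```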
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.8647 | 1.0 | 150 | 1.7671 | 0.58 |
| 1.0983 | 2.0 | 300 | 1.1722 | 0.65 |
| 0.875 | 3.0 | 450 | 0.9809 | 0.73 |
| 0.596 | 4.0 | 600 | 0.9323 | 0.75 |
| 0.4549 | 5.0 | 750 | 0.6444 | 0.82 |
| 0.1644 | 6.0 | 900 | 0.5420 | 0.85 |
| 0.136 | 7.0 | 1050 | 0.5333 | 0.82 |
| 0.1289 | 8.0 | 1200 | 0.6917 | 0.82 |
| 0.029 | 9.0 | 1350 | 0.5613 | 0.85 |
| 0.0409 | 10.0 | 1500 | 0.5618 | 0.84 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
numiros/Comma-Epsilon-v0.1-exl2 | numiros | 2025-06-16T06:55:19Z | 0 | 0 | null | [
"exl2",
"base_model:numiros/Comma-Epsilon-v0.1",
"base_model:finetune:numiros/Comma-Epsilon-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2025-06-16T05:47:23Z | ---
license: apache-2.0
base_model:
- numiros/Comma-Epsilon-v0.1
tags:
- exl2
---
[4bpw](https://huggingface.co/numiros/Comma-Epsilon-v0.1-exl2/tree/4bpw)
[5bpw](https://huggingface.co/numiros/Comma-Epsilon-v0.1-exl2/tree/5bpw) |
John6666/satyr-remix-ankara-illustrious-v17-sdxl | John6666 | 2025-06-16T06:55:10Z | 0 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"fantasy",
"paintery",
"styles",
"prompt comphrehension",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-06-16T06:49:23Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- fantasy
- paintery
- styles
- prompt comphrehension
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/974951?modelVersionId=1905968).
This model created by [Labdoge207](https://civitai.com/user/Labdoge207).
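The card gives no usage snippet; a hedged diffusers sketch (prompt and step count are illustrative):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL checkpoint and render one image.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/satyr-remix-ankara-illustrious-v17-sdxl",
    torch_dtype=torch.float16,
).to("cuda")
image = pipe("1girl, fantasy, painterly style", num_inference_steps=28).images[0]
image.save("sample.png")
```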
|
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.5_0.25_0.05_epoch2 | MinaMila | 2025-06-16T06:52:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T06:50:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
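A placeholder sketch until official code lands; the sampling settings below are arbitrary choices, not the authors' recommendations:

```python
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.5_0.25_0.05_epoch2",
    device_map="auto",
)
out = pipe(
    "The main limitation of this checkpoint is",
    max_new_tokens=40,
    do_sample=True,
    temperature=0.7,
)
print(out[0]["generated_text"])
```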
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
csukuangfj/en_US-glados-high | csukuangfj | 2025-06-16T06:51:42Z | 0 | 0 | null | [
"onnx",
"region:us"
] | null | 2025-06-16T06:42:28Z | # Introduction
See https://drive.google.com/file/d/1t2D7zP-e2flduS5duHm__UMB9RjuGqWK/view and https://github.com/rhasspy/piper/issues/187#issuecomment-1805709037
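A typical synthesis command for a Piper ONNX voice (a sketch assuming this repo follows the usual `model.onnx` + `model.onnx.json` layout; file names are placeholders):

```bash
# Install the Piper CLI, then stream text through the downloaded voice.
pip install piper-tts
echo 'Hello from GLaDOS.' | piper --model en_US-glados-high.onnx --output_file hello.wav
```
|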
ricardoguerreiro1800/msa0o | ricardoguerreiro1800 | 2025-06-16T06:49:42Z | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-06-16T06:49:42Z | ---
license: bigcode-openrail-m
---
|
eduardamendes1094/msa0o | eduardamendes1094 | 2025-06-16T06:49:42Z | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-06-16T06:49:42Z | ---
license: bigcode-openrail-m
---
|
danielpacheco9468/msa0o | danielpacheco9468 | 2025-06-16T06:49:42Z | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-06-16T06:49:42Z | ---
license: bigcode-openrail-m
---
|
John6666/noobai-xl-nai-xl-v-pred-colorfixed-v20-sdxl | John6666 | 2025-06-16T06:49:21Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"colorfix",
"contrast",
"v-pred",
"noobai",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-Vpred-1.0",
"base_model:finetune:Laxhar/noobai-XL-Vpred-1.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-06-16T06:43:11Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- colorfix
- contrast
- v-pred
- noobai
- illustrious
base_model: Laxhar/noobai-XL-Vpred-1.0
---
Original model is [here](https://civitai.com/models/1672827?modelVersionId=1907150).
This model created by [Volnovik](https://civitai.com/user/Volnovik).
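No usage notes are given; since this is a v-prediction checkpoint, a hedged diffusers sketch that switches the scheduler accordingly (prompt and guidance scale are illustrative):

```python
import torch
from diffusers import EulerDiscreteScheduler, StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/noobai-xl-nai-xl-v-pred-colorfixed-v20-sdxl",
    torch_dtype=torch.float16,
).to("cuda")
# v-pred checkpoints need a scheduler configured for v_prediction;
# zero-SNR rescaling is commonly paired with it.
pipe.scheduler = EulerDiscreteScheduler.from_config(
    pipe.scheduler.config,
    prediction_type="v_prediction",
    rescale_betas_zero_snr=True,
)
image = pipe("1girl, vivid colors, high contrast", guidance_scale=5.0).images[0]
image.save("vpred_sample.png")
```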
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.15_0.05_epoch2 | MinaMila | 2025-06-16T06:47:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T06:45:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Danteigxs/Pirocao | Danteigxs | 2025-06-16T06:47:02Z | 0 | 0 | null | [
"license:artistic-2.0",
"region:us"
] | null | 2025-06-16T06:47:02Z | ---
license: artistic-2.0
---
|
himedia/fincredit-lamma3-4b-lr5e05-bs2-r16-steps10-20250616_064455 | himedia | 2025-06-16T06:46:27Z | 0 | 0 | null | [
"safetensors",
"financial",
"credit-rating",
"korean",
"gemma",
"unsloth",
"fine-tuned",
"text-generation",
"conversational",
"ko",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-3B-Instruct",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-06-16T06:46:19Z | ---
language: ko
license: apache-2.0
base_model: unsloth/Llama-3.2-3B-Instruct
tags:
- financial
- credit-rating
- korean
- gemma
- unsloth
- fine-tuned
model_name: FinCreditLlama-3.2-3B
pipeline_tag: text-generation
---
# FinCreditLlama-3.2-3B
## Model Overview
FinCreditLlama-3.2-3B is a Korean language model designed specifically for financial credit evaluation.
**Base model**: unsloth/Llama-3.2-3B-Instruct
**Dataset**: himedia/financial_dummy_data_v2
**Training method**: LoRA (Low-Rank Adaptation)
**Training date**: 20250616_064455
## Hyperparameters
- **Learning Rate**: 5e-05
- **Max Steps**: 10
- **Batch Size**: 2
- **Gradient Accumulation**: 4
- **LoRA r**: 16
- **LoRA alpha**: 16
- **Max Sequence Length**: 2048
- **Warmup Steps**: 5
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("himedia/fincredit-lamma3-4b-lr5e05-bs2-r16-steps10-20250616_064455")
model = AutoModelForCausalLM.from_pretrained("himedia/fincredit-lamma3-4b-lr5e05-bs2-r16-steps10-20250616_064455")
# Simple inference example
prompt = "고객의 신용등급을 평가해주세요:"  # "Please evaluate the customer's credit rating:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
## Repository Name Structure
```
fincredit-lamma3-4b-lr5e05-bs2-r16-steps10-20250616_064455 = fincredit-gemma3-4b-lr5e05-bs2-r16-steps10-20250616_064455
```
- `fincredit-gemma3-4b`: model base name
- `lr5e05`: learning rate
- `bs2`: batch size
- `r16`: LoRA rank
- `steps10`: training steps
- `20250616_064455`: training timestamp
## Performance
This model has been fine-tuned on Korean financial text and is specialized for credit-evaluation question answering.
## License
Apache 2.0
|
John6666/noobai-xl-nai-xl-v-pred-colorfixed-v10-sdxl | John6666 | 2025-06-16T06:43:09Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"colorfix",
"v-pred",
"noobai",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-Vpred-1.0",
"base_model:finetune:Laxhar/noobai-XL-Vpred-1.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-06-16T06:36:52Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- colorfix
- v-pred
- noobai
- illustrious
base_model: Laxhar/noobai-XL-Vpred-1.0
---
Original model is [here](https://civitai.com/models/1672827?modelVersionId=1893403).
This model created by [Volnovik](https://civitai.com/user/Volnovik).
|
P0L3/cliscibert_scivocab_uncased | P0L3 | 2025-06-16T06:42:22Z | 26 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"climate-change",
"domain-adaptation",
"masked-language-modeling",
"scientific-nlp",
"transformer",
"BERT",
"SciBERT",
"en",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-03-21T09:05:31Z | ---
language: en
license: mit
library_name: transformers
tags:
- climate-change
- domain-adaptation
- masked-language-modeling
- scientific-nlp
- transformer
- BERT
- SciBERT
metrics:
- f1
model-index:
- name: CliSciBERT
results:
- task:
type: text-classification
name: Climate NLP Tasks (ClimaBench)
dataset:
name: ClimaBench
type: benchmark
metrics:
- type: f1
name: Macro F1 (avg)
value: 60.502
---
# CliSciBERT
**CliSciBERT** is a domain-adapted version of [**SciBERT**](https://huggingface.co/allenai/scibert_scivocab_uncased), further pretrained on a curated corpus of peer-reviewed research papers in the climate change domain. It is designed to enhance performance on climate-focused scientific NLP tasks by adapting the general scientific knowledge of SciBERT to the specialized subdomain of climate research.
## Overview
- **Base Model**: SciBERT (BERT-base architecture, scientific vocab)
- **Pretraining Method**: Continued pretraining (domain adaptation) using Masked Language Modeling (MLM)
- **Corpus**: Scientific papers focused on climate change and environmental science
- **Tokenizer**: SciBERT tokenizer (unchanged)
- **Language**: English
- **Domain**: Climate change research
## Performance
Evaluated on **ClimaBench**, a benchmark for climate-focused NLP tasks:
| Metric | Value |
|----------------|--------------|
| Macro F1 (avg) | 60.50|
| Tasks won | 0/7|
| Avg. Std Dev | 0.01772|
Note: While CliSciBERT builds on SciBERT's scientific grounding, its domain specialization improves relevance for climate-related NLP tasks.
Climate performance model card:
| | CliSciBERT |
|---------------------------------|-----------------------------|
| 1. Model publicly available? | Yes |
| 2. Time to train final model | 463h |
| 3. Time for all experiments | 1,226h ~ 51 days |
| 4. Power of GPU and CPU | 0.250 kW + 0.013 kW |
| 5. Location for computations | Croatia |
| 6. Energy mix at location | 224.71 gCO<sub>2</sub>eq/kWh |
| 7. CO<sub>2</sub>eq for final model | 28 kg CO<sub>2</sub> |
| 8. CO<sub>2</sub>eq for all experiments | 74 kg CO<sub>2</sub> |
## Intended Uses
**Use for:**
- Scientific text classification and relation extraction in climate change literature
- Domain-specific document tagging or summarization
- Supporting knowledge graph population for climate research
**Not recommended for:**
- Non-climate or general news content
- Non-English corpora
- Highly informal or colloquial text
Example:
``` python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline
import torch
# Load the pretrained model and tokenizer
model_name = "P0L3/cliscibert_scivocab_uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
# Move model to GPU if available
device = 0 if torch.cuda.is_available() else -1
# Create a fill-mask pipeline
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer, device=device)
# Example input from scientific climate literature
text = "The increase in greenhouse gas emissions has significantly affected the [MASK] balance of the Earth."
# Run prediction
predictions = fill_mask(text)
# Show top predictions
print(text)
print(10*">")
for p in predictions:
    print(f"{p['sequence']} → {p['score']:.4f}")
```
Output:
``` shell
The increase in greenhouse gas emissions has significantly affected the [MASK] balance of the Earth.
>>>>>>>>>>
the increase in greenhouse gas ... affected the energy balance of the earth. → 0.3911
the increase in greenhouse gas ... affected the radiative balance of the earth. → 0.2640
the increase in greenhouse gas ... affected the radiation balance of the earth. → 0.1233
the increase in greenhouse gas ... affected the carbon balance of the earth. → 0.0589
the increase in greenhouse gas ... affected the ecological balance of the earth. → 0.0332
```
## Limitations
- Retains SciBERT's limitations outside the scientific domain
- May inherit biases from climate change literature
- No tokenizer retraining; tokenization is optimized for general science, not climate-specific vocabulary
## Citation
If you use this model, please cite:
```bibtex
@article{poleksic_etal_2025,
  title={Climate Research Domain BERTs: Pretraining, Adaptation, and Evaluation},
  author={Poleksić, Andrija and
          Martinčić-Ipšić, Sanda},
  journal={PREPRINT (Version 1)},
  year={2025},
  doi={https://doi.org/10.21203/rs.3.rs-6644722/v1}
}
```
|
BWComedian/CSM-1B | BWComedian | 2025-06-16T06:42:12Z | 0 | 0 | null | [
"gpt2",
"conversational",
"maya",
"en",
"dataset:custom-maya-dialogues",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"region:us"
] | null | 2025-06-16T05:15:02Z | ---
language:
- "en"
thumbnail: "https://huggingface.co/BWComedian/CSM-1B/resolve/main/thumbnail.png"
tags:
- conversational
- maya
license: "mit"
datasets:
- custom-maya-dialogues
metrics:
- perplexity
base_model: "gpt2"
---
# CSM-1B Maya Model
This is the CSM-1B Maya-like conversational model.
## Description
A large language model trained for conversational AI, designed to simulate Maya.
## Usage
You can interact with this model via the Hugging Face Spaces app or load it using custom scripts.
## License
MIT License
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.15_0.05_epoch1 | MinaMila | 2025-06-16T06:40:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T06:38:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
prithivMLmods/visionOCR-3B-061125 | prithivMLmods | 2025-06-16T06:38:00Z | 36 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"text-generation-inference",
"OCR",
"Receipt",
"VisionOCR",
"Messy Handwriting OCR",
"conversational",
"en",
"zh",
"dataset:linxy/LaTeX_OCR",
"dataset:mychen76/ds_receipts_v2_eval",
"dataset:mychen76/invoices-and-receipts_ocr_v1",
"dataset:prithivMLmods/Latex-KIE",
"arxiv:2412.08746",
"arxiv:2309.00071",
"arxiv:2409.12191",
"arxiv:2308.12966",
"arxiv:2412.02210",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-06-11T15:09:35Z | ---
license: apache-2.0
language:
- en
- zh
tags:
- text-generation-inference
- OCR
- Receipt
- VisionOCR
- Messy Handwriting OCR
datasets:
- linxy/LaTeX_OCR
- mychen76/ds_receipts_v2_eval
- mychen76/invoices-and-receipts_ocr_v1
- prithivMLmods/Latex-KIE
base_model:
- Qwen/Qwen2.5-VL-3B-Instruct
pipeline_tag: image-text-to-text
library_name: transformers
---

# **visionOCR-3B-061125**
> The **visionOCR-3B-061125** model is a fine-tuned version of **Qwen/Qwen2.5-VL-3B-Instruct**, optimized for **Document-Level Optical Character Recognition (OCR)**, **long-context vision-language understanding**, and **accurate image-to-text conversion with mathematical LaTeX formatting**. Built on top of the Qwen2.5-VL architecture, this model significantly improves document comprehension, structured data extraction, and visual reasoning across diverse input formats.
# Key Enhancements
* **Advanced Document-Level OCR**: Capable of extracting structured content from complex, multi-page documents such as invoices, academic papers, forms, and scanned reports.
* **Enhanced Long-Context Vision-Language Understanding**: Designed to handle dense document layouts, long sequences of embedded text, tables, and diagrams with coherent cross-reference understanding.
* **State-of-the-Art Performance Across Resolutions**: Achieves competitive results on OCR and visual QA benchmarks such as DocVQA, MathVista, RealWorldQA, and MTVQA.
* **Video Understanding up to 20+ minutes**: Supports detailed comprehension of long-duration videos for content summarization, Q&A, and multi-modal reasoning.
* **Visually-Grounded Device Interaction**: Enables mobile/robotic device operation via visual inputs and text-based instructions using contextual understanding and decision-making logic.
# Quick Start with Transformers
```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
"prithivMLmods/visionOCR-3B-061125", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("prithivMLmods/visionOCR-3B-061125")
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
},
{"type": "text", "text": "Describe this image."},
],
}
]
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
# Intended Use
This model is intended for:
* High-fidelity OCR from documents, forms, receipts, and printed or scanned materials.
* Image and document-based question answering for educational and enterprise applications.
* Extraction and LaTeX formatting of mathematical expressions from printed or handwritten content.
* Retrieval and summarization from long documents, slides, and multi-modal inputs.
* Multilingual OCR and structured content extraction for global use cases.
* Robotic or mobile automation with vision-guided contextual interaction.
# Limitations
* May show degraded performance on extremely low-quality or occluded images.
* Not optimized for real-time applications on low-resource or edge devices due to computational demands.
* Variable accuracy on uncommon or low-resource languages/scripts.
* Long video processing may require substantial memory and is not optimized for streaming applications.
* Visual token settings affect performance; suboptimal configurations can impact results.
* In rare cases, outputs may contain hallucinated or contextually misaligned information.
## References
* **DocVLM: Make Your VLM an Efficient Reader**
[https://arxiv.org/pdf/2412.08746v1](https://arxiv.org/pdf/2412.08746v1)
* **YaRN: Efficient Context Window Extension of Large Language Models**
[https://arxiv.org/pdf/2309.00071](https://arxiv.org/pdf/2309.00071)
* **Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution**
[https://arxiv.org/pdf/2409.12191](https://arxiv.org/pdf/2409.12191)
* **Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond**
[https://arxiv.org/pdf/2308.12966](https://arxiv.org/pdf/2308.12966)
* **A Comprehensive and Challenging OCR Benchmark for Evaluating Large Multimodal Models in Literacy**
[https://arxiv.org/pdf/2412.02210](https://arxiv.org/pdf/2412.02210) |
dgambettaphd/M_llm2_run2_gen10_WXS_doc1000_synt64_lr1e-04_acm_FRESH | dgambettaphd | 2025-06-16T06:37:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T06:37:35Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
John6666/magnumspell-v10-sdxl | John6666 | 2025-06-16T06:36:50Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"mature",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-06-16T06:31:09Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- mature
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/1685170/magnumspell?modelVersionId=1907273).
This model was created by [Dark_Schneider](https://civitai.com/user/Dark_Schneider).
|
Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q8_0-GGUF | Triangle104 | 2025-06-16T06:36:03Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition",
"base_model:quantized:cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-16T06:33:11Z | ---
license: apache-2.0
base_model: cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q8_0-GGUF
This model was converted to GGUF format from [`cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition`](https://huggingface.co/cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition) for more details on the model.
---
Dolphin Mistral 24B Venice Edition is a collaborative project we undertook with Venice.ai with the goal of creating the most uncensored version of Mistral 24B for use within the Venice ecosystem.
Dolphin Mistral 24B Venice Edition is now live on https://venice.ai/ as "Venice Uncensored," the new default model for all Venice users.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q8_0-GGUF --hf-file dolphin-mistral-24b-venice-edition-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q8_0-GGUF --hf-file dolphin-mistral-24b-venice-edition-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q8_0-GGUF --hf-file dolphin-mistral-24b-venice-edition-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q8_0-GGUF --hf-file dolphin-mistral-24b-venice-edition-q8_0.gguf -c 2048
```
|
ka-ops/Meta-Llama-3.1-8B-Instruct-FP8 | ka-ops | 2025-06-16T06:35:16Z | 0 | 0 | null | [
"safetensors",
"llama",
"fp8",
"vllm",
"text-generation",
"conversational",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:quantized:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"compressed-tensors",
"region:us"
] | text-generation | 2025-06-16T06:24:00Z | ---
tags:
- fp8
- vllm
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
pipeline_tag: text-generation
license: llama3.1
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
---
# Meta-Llama-3.1-8B-Instruct-FP8
## Model Overview
- **Model Architecture:** Meta-Llama-3.1
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
- **Weight quantization:** FP8
- **Activation quantization:** FP8
- **Intended Use Cases:** Intended for commercial and research use in multiple languages. Similarly to [Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct), this model is intended for assistant-like chat.
- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English.
- **Release Date:** 7/23/2024
- **Version:** 1.0
- **License(s):** [llama3.1](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B/blob/main/LICENSE)
- **Model Developers:** Neural Magic
Quantized version of [Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct).
It achieves an average score of 73.44 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 73.79.
### Model Optimizations
This model was obtained by quantizing the weights and activations of [Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) to FP8 data type, ready for inference with vLLM built from source.
This optimization reduces the number of bits per parameter from 16 to 8, reducing the disk size and GPU memory requirements by approximately 50%.
Only the weights and activations of the linear operators within transformers blocks are quantized. Symmetric per-tensor quantization is applied, in which a single linear scale maps the FP8 representations of the quantized weights and activations back to their original values.
[LLM Compressor](https://github.com/vllm-project/llm-compressor) is used for quantization with 512 sequences of UltraChat.
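As an illustration of the scheme above, here is a minimal sketch of symmetric per-tensor FP8 quantization in plain PyTorch. This is not the LLM Compressor implementation; the `torch.float8_e4m3fn` dtype and the E4M3 maximum of 448 are standard PyTorch (2.1+), while the helper names are purely illustrative.
```python
import torch

FP8_E4M3_MAX = 448.0  # largest magnitude representable in FP8 E4M3

def fp8_quantize_per_tensor(w: torch.Tensor):
    """Symmetric per-tensor quantization: one linear scale for the whole tensor."""
    scale = w.abs().max() / FP8_E4M3_MAX      # single scale shared by all elements
    q = (w / scale).to(torch.float8_e4m3fn)   # cast to 8-bit float
    return q, scale

def fp8_dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.to(torch.float32) * scale        # map back to real values

w = torch.randn(4096, 4096)
q, scale = fp8_quantize_per_tensor(w)
print((w - fp8_dequantize(q, scale)).abs().max())  # worst-case rounding error
```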
## Deployment
### Use with vLLM
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
model_id = "neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8"
sampling_params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=256)
tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
prompts = tokenizer.apply_chat_template(messages, tokenize=False)
llm = LLM(model=model_id)
outputs = llm.generate(prompts, sampling_params)
generated_text = outputs[0].outputs[0].text
print(generated_text)
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
## Creation
This model was created by applying [LLM Compressor with calibration samples from UltraChat](https://github.com/vllm-project/llm-compressor/blob/sa/big_model_support/examples/big_model_offloading/big_model_w8a8_calibrate.py), as presented in the code snippet below.
```python
import torch
from datasets import load_dataset
from transformers import AutoTokenizer
from llmcompressor.transformers import SparseAutoModelForCausalLM, oneshot
from llmcompressor.transformers.compression.helpers import (
calculate_offload_device_map,
custom_offload_device_map,
)
recipe = """
quant_stage:
    quant_modifiers:
        QuantizationModifier:
            ignore: ["lm_head"]
            config_groups:
                group_0:
                    weights:
                        num_bits: 8
                        type: float
                        strategy: tensor
                        dynamic: false
                        symmetric: true
                    input_activations:
                        num_bits: 8
                        type: float
                        strategy: tensor
                        dynamic: false
                        symmetric: true
                    targets: ["Linear"]
"""
model_stub = "meta-llama/Meta-Llama-3.1-8B-Instruct"
model_name = model_stub.split("/")[-1]
device_map = calculate_offload_device_map(
model_stub, reserve_for_hessians=False, num_gpus=1, torch_dtype="auto"
)
model = SparseAutoModelForCausalLM.from_pretrained(
model_stub, torch_dtype="auto", device_map=device_map
)
tokenizer = AutoTokenizer.from_pretrained(model_stub)
output_dir = f"./{model_name}-FP8"
DATASET_ID = "HuggingFaceH4/ultrachat_200k"
DATASET_SPLIT = "train_sft"
NUM_CALIBRATION_SAMPLES = 512
MAX_SEQUENCE_LENGTH = 4096
ds = load_dataset(DATASET_ID, split=DATASET_SPLIT)
ds = ds.shuffle(seed=42).select(range(NUM_CALIBRATION_SAMPLES))
def preprocess(example):
    return {
        "text": tokenizer.apply_chat_template(
            example["messages"],
            tokenize=False,
        )
    }
ds = ds.map(preprocess)
def tokenize(sample):
    return tokenizer(
        sample["text"],
        padding=False,
        max_length=MAX_SEQUENCE_LENGTH,
        truncation=True,
        add_special_tokens=False,
    )
ds = ds.map(tokenize, remove_columns=ds.column_names)
oneshot(
model=model,
output_dir=output_dir,
dataset=ds,
recipe=recipe,
max_seq_length=MAX_SEQUENCE_LENGTH,
num_calibration_samples=NUM_CALIBRATION_SAMPLES,
save_compressed=True,
)
```
## Evaluation
The model was evaluated on MMLU, ARC-Challenge, GSM-8K, Hellaswag, Winogrande and TruthfulQA.
Evaluation was conducted using the Neural Magic fork of [lm-evaluation-harness](https://github.com/neuralmagic/lm-evaluation-harness/tree/llama_3.1_instruct) (branch llama_3.1_instruct) and the [vLLM](https://docs.vllm.ai/en/stable/) engine.
This version of the lm-evaluation-harness includes versions of ARC-Challenge, GSM-8K, MMLU, and MMLU-cot that match the prompting style of [Meta-Llama-3.1-Instruct-evals](https://huggingface.co/datasets/meta-llama/Meta-Llama-3.1-8B-Instruct-evals).
### Accuracy
#### Open LLM Leaderboard evaluation scores
<table>
<tr>
<td><strong>Benchmark</strong>
</td>
<td><strong>Meta-Llama-3.1-8B-Instruct </strong>
</td>
<td><strong>Meta-Llama-3.1-8B-Instruct-FP8(this model)</strong>
</td>
<td><strong>Recovery</strong>
</td>
</tr>
<tr>
<td>MMLU (5-shot)
</td>
<td>67.95
</td>
<td>67.97
</td>
<td>100.0%
</td>
</tr>
<tr>
<td>MMLU-cot (0-shot)
</td>
<td>71.24
</td>
<td>71.12
</td>
<td>99.83%
</td>
</tr>
<tr>
<td>ARC Challenge (0-shot)
</td>
<td>82.00
</td>
<td>81.66
</td>
<td>99.59%
</td>
</tr>
<tr>
<td>GSM-8K-cot (8-shot, strict-match)
</td>
<td>81.96
</td>
<td>81.12
</td>
<td>98.98%
</td>
</tr>
<tr>
<td>Hellaswag (10-shot)
</td>
<td>80.46
</td>
<td>80.4
</td>
<td>99.93%
</td>
</tr>
<tr>
<td>Winogrande (5-shot)
</td>
<td>78.45
</td>
<td>77.90
</td>
<td>99.30%
</td>
</tr>
<tr>
<td>TruthfulQA (0-shot, mc2)
</td>
<td>54.50
</td>
<td>53.92
</td>
<td>98.94%
</td>
</tr>
<tr>
<td><strong>Average</strong>
</td>
<td><strong>73.79</strong>
</td>
<td><strong>73.44</strong>
</td>
<td><strong>99.52%</strong>
</td>
</tr>
</table>
### Reproduction
The results were obtained using the following commands:
#### MMLU
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
--tasks mmlu \
--num_fewshot 5 \
--batch_size auto
```
#### MMLU-cot
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
--tasks mmlu_cot_0shot_llama_3.1_instruct \
--apply_chat_template \
--num_fewshot 0 \
--batch_size auto
```
#### ARC-Challenge
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
--tasks arc_challenge_llama_3.1_instruct \
--apply_chat_template \
--num_fewshot 0 \
--batch_size auto
```
#### GSM-8K
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
--tasks gsm8k_cot_llama_3.1_instruct \
--apply_chat_template \
--fewshot_as_multiturn \
--num_fewshot 8 \
--batch_size auto
```
#### Hellaswag
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
--tasks hellaswag \
--num_fewshot 10 \
--batch_size auto
```
#### Winogrande
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
--tasks winogrande \
--num_fewshot 5 \
--batch_size auto
```
#### TruthfulQA
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
--tasks truthfulqa \
--num_fewshot 0 \
--batch_size auto
``` |
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.15_0.15_epoch2 | MinaMila | 2025-06-16T06:33:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T06:31:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mrpeerat/new_model | mrpeerat | 2025-06-16T06:32:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2203.05482",
"base_model:aisingapore/Gemma-SEA-LION-v3-9B",
"base_model:merge:aisingapore/Gemma-SEA-LION-v3-9B",
"base_model:aisingapore/Gemma-SEA-LION-v3-9B-IT",
"base_model:merge:aisingapore/Gemma-SEA-LION-v3-9B-IT",
"base_model:google/gemma-2-9b-it",
"base_model:merge:google/gemma-2-9b-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T06:29:10Z | ---
base_model:
- aisingapore/Gemma-SEA-LION-v3-9B
- aisingapore/Gemma-SEA-LION-v3-9B-IT
- google/gemma-2-9b-it
library_name: transformers
tags:
- mergekit
- merge
---
# linear_model
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Linear](https://arxiv.org/abs/2203.05482) merge method using [aisingapore/Gemma-SEA-LION-v3-9B-IT](https://huggingface.co/aisingapore/Gemma-SEA-LION-v3-9B-IT) as a base.
### Models Merged
The following models were included in the merge:
* /mnt/weka/aisg/peerat/LLaMA-Factory/Wangchanlion-gemma2-wangchanxFull-Syn120k-1e4-full
* [aisingapore/Gemma-SEA-LION-v3-9B](https://huggingface.co/aisingapore/Gemma-SEA-LION-v3-9B)
* [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: aisingapore/Gemma-SEA-LION-v3-9B
    parameters:
      weight: 1.0
      density: 1
  - model: /mnt/weka/aisg/peerat/LLaMA-Factory/Wangchanlion-gemma2-wangchanxFull-Syn120k-1e4-full
    parameters:
      weight: 1.0
      density: 1
  - model: google/gemma-2-9b-it
    parameters:
      weight: 1.0
      density: 1
merge_method: linear
base_model: aisingapore/Gemma-SEA-LION-v3-9B-IT
parameters:
  # t: [0, 0.5, 1, 0.5, 0]
  weight: 1.0
  density: 1
  normalize: true
  int8_mask: true
tokenizer:
  source: aisingapore/Gemma-SEA-LION-v3-9B-IT
dtype: bfloat16
```
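For intuition, a normalized linear merge is just a weighted average of the checkpoints' parameters. The sketch below illustrates that idea with plain PyTorch state dicts; it covers only the two public checkpoints (the local Wangchanlion path above is omitted), is far from memory-efficient, and is not how mergekit is implemented internally.
```python
import torch
from transformers import AutoModelForCausalLM

# Two of the merged checkpoints; each carries weight 1.0 in the config above.
model_ids = ["aisingapore/Gemma-SEA-LION-v3-9B", "google/gemma-2-9b-it"]
weights = [1.0, 1.0]

# Load full state dicts (illustrative only: 9B checkpoints need a lot of RAM).
state_dicts = [
    AutoModelForCausalLM.from_pretrained(m, torch_dtype=torch.bfloat16).state_dict()
    for m in model_ids
]

# normalize: true -> divide the weighted sum by the total weight.
total = sum(weights)
merged_state = {
    key: sum(w * sd[key].float() for w, sd in zip(weights, state_dicts)) / total
    for key in state_dicts[0]
}
# merged_state can then be cast back to bfloat16 and loaded into a model shell.
```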
|
MalvinasMan/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-slimy_shrewd_whale | MalvinasMan | 2025-06-16T06:31:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am slimy shrewd whale",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-29T15:17:12Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-slimy_shrewd_whale
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am slimy shrewd whale
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-slimy_shrewd_whale
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="MalvinasMan/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-slimy_shrewd_whale", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
prithivMLmods/Ross-640-BMath-1.5B-GGUF | prithivMLmods | 2025-06-16T06:30:52Z | 215 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"math",
"text-generation",
"en",
"base_model:prithivMLmods/Ross-640-BMath-1.5B",
"base_model:quantized:prithivMLmods/Ross-640-BMath-1.5B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-11T12:19:02Z | ---
license: apache-2.0
language:
- en
base_model:
- prithivMLmods/Ross-640-BMath-1.5B
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- math
---
# **Ross-640-BMath-1.5B-GGUF**
> **Ross-640-BMath-1.5B** is an **experimental, high-precision math explanation model** fine-tuned on **Qwen2-1.5B**, designed to provide **step-by-step mathematical derivations** and **detailed concept explanations** across a wide range of mathematical domains. It is **not optimized for general reasoning or conversation**, and focuses primarily on **structured, non-reasoning math workflows** including algebra, calculus, number theory, and combinatorics.
## Model Files
| File Name | Size | Format | Description |
|-----------|------|--------|-------------|
| Ross-640-BMath-1.5B.F32.gguf | 6.18 GB | F32 | Full precision 32-bit floating point |
| Ross-640-BMath-1.5B.F16.gguf | 3.09 GB | F16 | Half precision 16-bit floating point |
| Ross-640-BMath-1.5B.BF16.gguf | 3.09 GB | BF16 | Brain floating point 16-bit |
| Ross-640-BMath-1.5B.Q8_0.gguf | 1.65 GB | Q8_0 | 8-bit quantized |
| Ross-640-BMath-1.5B.Q6_K.gguf | 1.27 GB | Q6_K | 6-bit quantized |
| Ross-640-BMath-1.5B.Q5_K_M.gguf | 1.13 GB | Q5_K_M | 5-bit quantized, medium quality |
| Ross-640-BMath-1.5B.Q5_K_S.gguf | 1.1 GB | Q5_K_S | 5-bit quantized, small quality |
| Ross-640-BMath-1.5B.Q4_K_M.gguf | 986 MB | Q4_K_M | 4-bit quantized, medium quality |
| Ross-640-BMath-1.5B.Q4_K_S.gguf | 940 MB | Q4_K_S | 4-bit quantized, small quality |
| Ross-640-BMath-1.5B.Q3_K_L.gguf | 880 MB | Q3_K_L | 3-bit quantized, large quality |
| Ross-640-BMath-1.5B.Q3_K_M.gguf | 824 MB | Q3_K_M | 3-bit quantized, medium quality |
| Ross-640-BMath-1.5B.Q3_K_S.gguf | 761 MB | Q3_K_S | 3-bit quantized, small quality |
| Ross-640-BMath-1.5B.Q2_K.gguf | 676 MB | Q2_K | 2-bit quantized |
## Quants Usage
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
 |
himedia/fincredit-gemma3-4b-lr5e05-bs2-r16-steps10-20250616_062824 | himedia | 2025-06-16T06:30:44Z | 0 | 0 | null | [
"safetensors",
"financial",
"credit-rating",
"korean",
"gemma",
"unsloth",
"fine-tuned",
"text-generation",
"conversational",
"ko",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-3B-Instruct",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-06-16T06:30:36Z | ---
language: ko
license: apache-2.0
base_model: unsloth/Llama-3.2-3B-Instruct
tags:
- financial
- credit-rating
- korean
- gemma
- unsloth
- fine-tuned
model_name: FinCreditLlama-3.2-3B
pipeline_tag: text-generation
---
# FinCreditLlama-3.2-3B
## Model Overview
FinCreditLlama-3.2-3B is a Korean language model purpose-built for financial credit evaluation.
**Base model**: unsloth/Llama-3.2-3B-Instruct
**Dataset**: himedia/financial_dummy_data_v2
**Training method**: LoRA (Low-Rank Adaptation)
**Training timestamp**: 20250616_062824
## Hyperparameters
- **Learning Rate**: 5e-05
- **Max Steps**: 10
- **Batch Size**: 2
- **Gradient Accumulation**: 4
- **LoRA r**: 16
- **LoRA alpha**: 16
- **Max Sequence Length**: 2048
- **Warmup Steps**: 5
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("himedia/fincredit-gemma3-4b-lr5e05-bs2-r16-steps10-20250616_062824")
model = AutoModelForCausalLM.from_pretrained("himedia/fincredit-gemma3-4b-lr5e05-bs2-r16-steps10-20250616_062824")
# Simple inference example
prompt = "Please evaluate the customer's credit rating:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
## Repository Name Structure
```
fincredit-gemma3-4b-lr5e05-bs2-r16-steps10-20250616_062824
```
- `fincredit-gemma3-4b`: model base name
- `lr5e05`: Learning Rate
- `bs2`: Batch Size
- `r16`: LoRA rank
- `steps10`: training steps
- `20250616_062824`: training timestamp
## Performance
This model has been fine-tuned on Korean financial text and is specialized for question answering related to credit evaluation.
## License
Apache 2.0
|
yukinoshitawebid/shortlink | yukinoshitawebid | 2025-06-16T06:29:39Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2025-06-09T10:15:34Z | ---
license: mit
---
https://short.sampangkab.go.id/Home
https://uinkhas.id/index.php?q=vhxwni
https://whyvpn.my.id/r?code=SL2506090001
https://v-online.id/f0f94920
https://doodster001.web.id/e/?id=iNSYacMWq
https://anichin.kee.my.id/
https://ddoodd.biz.id/f314b7
https://tlkm.id/bhtEqikdszUNzsp
https://2ur.jp/bjP1
http://www.teknofull.com.tr/#s=cwJ8KjseGBJ9KjskmLO0bLF4Gw17v7bycETomR5tb8lavZCyv8TkGfghGqKrFjYtFVWtW819bZegGENrGRXgb7OsQEOaWw0eFVMumROgGRQtvZ2ubLJtQfgiGfl0nRrkxP%3D%3D
https://fundn.eu/jitbl |
irqol123/m | irqol123 | 2025-06-16T06:27:55Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-16T06:27:55Z | ---
license: apache-2.0
---
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.15_0.15_epoch1 | MinaMila | 2025-06-16T06:27:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T06:25:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
talha2001/textual_model_llama | talha2001 | 2025-06-16T06:26:02Z | 31 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-12-10T18:54:04Z | ---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** talha2001
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Jobz-Hunting-Pakistan-Viral-videos/VIDEO.Imsha.Rehman.Viral.Video.Original | Jobz-Hunting-Pakistan-Viral-videos | 2025-06-16T06:25:59Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-16T06:25:44Z | [](https://tinyurl.com/3cf32han) |
mlx-community/Jan-nano-8bit | mlx-community | 2025-06-16T06:23:32Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:Menlo/Jan-nano",
"base_model:quantized:Menlo/Jan-nano",
"license:apache-2.0",
"8-bit",
"region:us"
] | text-generation | 2025-06-16T06:19:04Z | ---
license: apache-2.0
base_model: Menlo/Jan-nano
pipeline_tag: text-generation
library_name: mlx
tags:
- mlx
---
# mlx-community/Jan-nano-8bit
This model [mlx-community/Jan-nano-8bit](https://huggingface.co/mlx-community/Jan-nano-8bit) was
converted to MLX format from [Menlo/Jan-nano](https://huggingface.co/Menlo/Jan-nano)
using mlx-lm version **0.25.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download (if needed) and load the quantized model and tokenizer.
model, tokenizer = load("mlx-community/Jan-nano-8bit")

prompt = "hello"

# Apply the model's chat template when one is defined.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
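For quick one-off generations, the same package also installs a command-line entry point; a minimal invocation (assuming a recent mlx-lm) looks like:
```bash
mlx_lm.generate --model mlx-community/Jan-nano-8bit --prompt "hello"
```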
|
onnx-community/indicwav2vec-hindi-ONNX | onnx-community | 2025-06-16T06:22:33Z | 0 | 0 | transformers.js | [
"transformers.js",
"onnx",
"wav2vec2",
"automatic-speech-recognition",
"base_model:ai4bharat/indicwav2vec-hindi",
"base_model:quantized:ai4bharat/indicwav2vec-hindi",
"region:us"
] | automatic-speech-recognition | 2025-06-16T06:22:10Z | ---
library_name: transformers.js
base_model:
- ai4bharat/indicwav2vec-hindi
---
# indicwav2vec-hindi (ONNX)
This is an ONNX version of [ai4bharat/indicwav2vec-hindi](https://huggingface.co/ai4bharat/indicwav2vec-hindi). It was automatically converted and uploaded using [this space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.15_0.25_epoch2 | MinaMila | 2025-06-16T06:20:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T06:18:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
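In the absence of an official snippet, the minimal sketch below assumes the checkpoint loads through the standard transformers API; `trust_remote_code=True` reflects the repo's `custom_code` tag. This is not an example provided by the model authors.
```python
# Minimal loading/generation sketch; assumes a standard causal-LM checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.15_0.25_epoch2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```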
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |