modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
ujjwal1996/Fine_tuning_unsloth-DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit_70steps | ujjwal1996 | 2025-05-22T04:23:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-21T07:54:00Z | ---
base_model: unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ujjwal1996
- **License:** apache-2.0
- **Finetuned from model:** unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
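As a quick start, here is a minimal loading sketch; it is an untested assumption that this checkpoint loads like other Unsloth 4-bit fine-tunes, and `max_seq_length` is a placeholder:
```python
from unsloth import FastLanguageModel
# Load the 4-bit fine-tune through Unsloth's standard API
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="ujjwal1996/Fine_tuning_unsloth-DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit_70steps",
    max_seq_length=2048,  # placeholder; choose a context length for your use case
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path
```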
|
martinaianaro99/ViLT_ft_CG_L2_F | martinaianaro99 | 2025-05-22T04:23:43Z | 0 | 0 | null | [
"safetensors",
"vilt",
"region:us"
] | null | 2025-05-12T12:09:46Z | # ViLT Model fine-tuned on CG_L2_F dataset
Model checkpoint from epoch 10.
## Usage
```python
from transformers import ViltProcessor, ViltForMaskedLM
# Load model and processor
processor = ViltProcessor.from_pretrained('martinaianaro99/ViLT_ft_CG_L2_F')
model = ViltForMaskedLM.from_pretrained('martinaianaro99/ViLT_ft_CG_L2_F')
```
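A short inference sketch for masked-language modeling (the image path and prompt below are placeholders; the flow follows the standard ViLT MLM interface):
```python
from PIL import Image
import torch
image = Image.open("example.jpg")  # placeholder input image
text = "a photo of a [MASK]"       # prompt containing one masked token
encoding = processor(image, text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**encoding)
# Locate the masked position and decode the top-scoring token for it
mask_pos = (encoding.input_ids == processor.tokenizer.mask_token_id).nonzero()[0, 1]
predicted_id = outputs.logits[0, mask_pos].argmax(-1).item()
print(processor.tokenizer.decode([predicted_id]))
```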
|
Luigi112001/llama4-finetune | Luigi112001 | 2025-05-22T04:21:55Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/mistral-7b-v0.3-bnb-4bit",
"base_model:adapter:unsloth/mistral-7b-v0.3-bnb-4bit",
"region:us"
] | null | 2025-05-22T04:21:51Z | ---
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
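The card leaves this blank; based only on the repository metadata (a PEFT adapter over `unsloth/mistral-7b-v0.3-bnb-4bit`), a minimal, untested loading sketch would be:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the 4-bit base model, then attach the adapter from this repository
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/mistral-7b-v0.3-bnb-4bit", device_map="auto"
)
model = PeftModel.from_pretrained(base, "Luigi112001/llama4-finetune")
tokenizer = AutoTokenizer.from_pretrained("unsloth/mistral-7b-v0.3-bnb-4bit")
```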
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
the-acorn-ai/Qwen3-4B-Base-4K-KuhnPoker-Random-Role-0522-Zichen-step_00256 | the-acorn-ai | 2025-05-22T04:21:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-22T04:18:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
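The card leaves this blank; since the metadata tags the repository as a `qwen3` text-generation model, a minimal, untested generation sketch would be:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "the-acorn-ai/Qwen3-4B-Base-4K-KuhnPoker-Random-Role-0522-Zichen-step_00256"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
# Generate a short continuation from a placeholder prompt
inputs = tokenizer("You are playing Kuhn poker.", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```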
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
the-acorn-ai/Qwen3-4B-Base-4K-KuhnPoker-Random-Role-0522-Zichen-step_00224 | the-acorn-ai | 2025-05-22T04:18:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-22T04:15:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SamuelAIA/nanoVLM | SamuelAIA | 2025-05-22T04:15:40Z | 0 | 0 | nanovlm | [
"nanovlm",
"safetensors",
"vision-language",
"multimodal",
"research",
"image-text-to-text",
"license:mit",
"region:us"
] | image-text-to-text | 2025-05-22T04:14:59Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
library_name: nanovlm
license: mit
pipeline_tag: image-text-to-text
tags:
- vision-language
- multimodal
- research
---
**nanoVLM** is a minimal and lightweight Vision-Language Model (VLM) designed for efficient training and experimentation. Built using pure PyTorch, the entire model architecture and training logic fits within ~750 lines of code. It combines a ViT-based image encoder (SigLIP-B/16-224-85M) with a lightweight causal language model (SmolLM2-135M), resulting in a compact 222M parameter model.
For more information, check out the base model at https://huggingface.co/lusxvr/nanoVLM-222M.
**Usage:**
Clone the nanoVLM repository: https://github.com/huggingface/nanoVLM.
Follow the install instructions and run the following code:
```python
from models.vision_language_model import VisionLanguageModel
model = VisionLanguageModel.from_pretrained("SamuelAIA/nanoVLM")
```
|
wzhgba/opendwm-models | wzhgba | 2025-05-22T04:15:09Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-22T04:15:09Z | ---
license: apache-2.0
---
|
DanielNRU/pollen-ner2-850 | DanielNRU | 2025-05-22T04:10:34Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:DeepPavlov/bert-base-bg-cs-pl-ru-cased",
"base_model:adapter:DeepPavlov/bert-base-bg-cs-pl-ru-cased",
"region:us"
] | null | 2025-05-22T04:03:49Z | ---
library_name: peft
base_model: DeepPavlov/bert-base-bg-cs-pl-ru-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: pollen-ner2-850
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pollen-ner2-850
This model is a fine-tuned version of [DeepPavlov/bert-base-bg-cs-pl-ru-cased](https://huggingface.co/DeepPavlov/bert-base-bg-cs-pl-ru-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2169
- Precision: 0.7687
- Recall: 0.8474
- F1: 0.8061
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the code sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
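The same configuration expressed as a `transformers` `TrainingArguments` sketch (argument names follow the Trainer API; `output_dir` is a placeholder):
```python
from transformers import TrainingArguments
args = TrainingArguments(
    output_dir="pollen-ner2-850",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",           # AdamW with the default betas/epsilon listed above
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```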
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| No log | 1.0 | 107 | 0.2349 | 0.7333 | 0.8394 | 0.7828 |
| No log | 2.0 | 214 | 0.2238 | 0.7473 | 0.8434 | 0.7925 |
| No log | 3.0 | 321 | 0.2151 | 0.7680 | 0.8373 | 0.8012 |
| No log | 4.0 | 428 | 0.2206 | 0.7536 | 0.8414 | 0.7951 |
| 0.4882 | 5.0 | 535 | 0.2169 | 0.7687 | 0.8474 | 0.8061 |
| 0.4882 | 6.0 | 642 | 0.2211 | 0.7518 | 0.8454 | 0.7958 |
| 0.4882 | 7.0 | 749 | 0.2176 | 0.7608 | 0.8494 | 0.8027 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.7.0+cu128
- Datasets 3.5.0
- Tokenizers 0.21.1 |
CNMA/CNMA23 | CNMA | 2025-05-22T04:09:37Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-22T04:09:37Z | ---
license: apache-2.0
---
|
harshithan/fb-post-classifier-roberta_v1 | harshithan | 2025-05-22T04:08:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"facebook",
"sentiment",
"customer-support",
"huggingface",
"fine-tuned",
"en",
"dataset:custom",
"base_model:cardiffnlp/twitter-roberta-base",
"base_model:finetune:cardiffnlp/twitter-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-21T00:26:40Z | ---
license: mit
language:
- en
metrics:
- f1
- accuracy
base_model:
- cardiffnlp/twitter-roberta-base
datasets:
- custom
tags:
- facebook
- text-classification
- sentiment
- customer-support
- transformers
- roberta
- huggingface
- fine-tuned
model-index:
- name: fb-post-classifier-roberta
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: Facebook Posts (Appreciation / Complaint / Feedback)
type: custom
metrics:
- name: F1
type: f1
value: 0.8979
library_name: transformers
pipeline_tag: text-classification
---
# Facebook Post Classifier (RoBERTa Base, fine-tuned)
This model classifies short Facebook posts into **one** of the following **three mutually exclusive categories**:
- `Appreciation`
- `Complaint`
- `Feedback`
It is fine-tuned on ~8k manually labeled posts from business pages (e.g. Target, Walmart), based on the `cardiffnlp/twitter-roberta-base` model, which is pretrained on 58M tweets.
## 🧠 Intended Use
- Customer support automation
- Sentiment analysis on social media
- CRM pipelines or chatbot classification
## 📊 Performance
| Class | Precision | Recall | F1 Score |
|--------------|-----------|--------|----------|
| Appreciation | 0.906 | 0.936 | 0.921 |
| Complaint | 0.931 | 0.902 | 0.916 |
| Feedback | 0.840 | 0.874 | 0.857 |
| **Average** | – | – | **0.898** |
> Evaluated on 2039 unseen posts with held-out labels using macro-averaged F1.
## 🛠️ How to Use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from torch.nn.functional import softmax
import torch
model = AutoModelForSequenceClassification.from_pretrained("harshithan/fb-post-classifier-roberta_v1")
tokenizer = AutoTokenizer.from_pretrained("harshithan/fb-post-classifier-roberta_v1")
inputs = tokenizer("I love the fast delivery!", return_tensors="pt")
outputs = model(**inputs)
probs = softmax(outputs.logits, dim=1)
label = torch.argmax(probs).item()
classes = ["Appreciation", "Complaint", "Feedback"]
print("Predicted:", classes[label])
```
## 🧾 License
MIT License
## 🙋‍♀️ Author
This model was fine-tuned by @harshithan.
## 📚 Academic Disclaimer
This model was developed as part of an academic experimentation project. It is intended solely for educational and research purposes.
The model has not been validated for production use and may not generalize to real-world Facebook or customer support data beyond the scope of the assignment.
|
kittuitsue/xcvxcv | kittuitsue | 2025-05-22T04:05:56Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-05-22T04:05:56Z | ---
license: creativeml-openrail-m
---
|
suringrepell/xcvzxcv | suringrepell | 2025-05-22T04:05:54Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-05-22T04:05:54Z | ---
license: bigscience-openrail-m
---
|
DanielNRU/pollen-ner2-800 | DanielNRU | 2025-05-22T04:03:32Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:DeepPavlov/bert-base-bg-cs-pl-ru-cased",
"base_model:adapter:DeepPavlov/bert-base-bg-cs-pl-ru-cased",
"region:us"
] | null | 2025-05-22T03:57:09Z | ---
library_name: peft
base_model: DeepPavlov/bert-base-bg-cs-pl-ru-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: pollen-ner2-800
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pollen-ner2-800
This model is a fine-tuned version of [DeepPavlov/bert-base-bg-cs-pl-ru-cased](https://huggingface.co/DeepPavlov/bert-base-bg-cs-pl-ru-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2265
- Precision: 0.7546
- Recall: 0.8273
- F1: 0.7893
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| No log | 1.0 | 100 | 0.2447 | 0.7059 | 0.8193 | 0.7584 |
| No log | 2.0 | 200 | 0.2398 | 0.7180 | 0.8233 | 0.7671 |
| No log | 3.0 | 300 | 0.2361 | 0.7326 | 0.8253 | 0.7762 |
| No log | 4.0 | 400 | 0.2313 | 0.7406 | 0.8313 | 0.7833 |
| 0.5116 | 5.0 | 500 | 0.2265 | 0.7546 | 0.8273 | 0.7893 |
| 0.5116 | 6.0 | 600 | 0.2334 | 0.7220 | 0.8293 | 0.7720 |
| 0.5116 | 7.0 | 700 | 0.2255 | 0.7446 | 0.8313 | 0.7856 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.7.0+cu128
- Datasets 3.5.0
- Tokenizers 0.21.1 |
chancharikm/qwen2.5-vl-72b-cam-motion-preview | chancharikm | 2025-05-22T04:02:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"llama-factory",
"full",
"generated_from_trainer",
"video-text-to-text",
"arxiv:2404.01291",
"arxiv:2504.15376",
"base_model:Qwen/Qwen2.5-VL-72B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-72B-Instruct",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | video-text-to-text | 2025-05-22T00:46:27Z | ---
base_model: Qwen/Qwen2.5-VL-72B-Instruct
library_name: transformers
license: other
tags:
- llama-factory
- full
- generated_from_trainer
pipeline_tag: video-text-to-text
model-index:
- name: bal_imb_cap_full_lr2e-4_epoch10.0_freezevisTrue_fps8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
## Model description
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct) on the largest high-quality camera motion dataset that is currently publicly available. This preview model is the current SOTA for classifying camera motion and for video-text retrieval with camera motion captions using [VQAScore](https://arxiv.org/pdf/2404.01291). Find more information about our work on our GitHub page for [CameraBench](https://github.com/sy77777en/CameraBench). *More updates to the benchmark and models will come in the future. Stay tuned!*
## Intended uses & limitations
The usage is identical to a [Qwen2.5-VL](https://github.com/QwenLM/Qwen2.5-VL) model. Our model is primarily useful for camera motion classification in videos as well as video-text retrieval (current SOTA in both tasks).
**A quick demo is shown below:**
<details>
<summary>Generative Scoring (for classification and retrieval):</summary>
```python
# Import necessary libraries
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info
import torch
# Load the model
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
"chancharikm/qwen2.5-vl-72B-cam-motion-preview", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-72B-Instruct")
# Prepare input data
video_path = "file:///path/to/video1.mp4"
text_description = "the camera tilting upward"
question = f"Does this video show \"{text_description}\"?"
# Format the input for the model
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": video_path,
"fps": 8.0, # Recommended FPS for optimal inference
},
{"type": "text", "text": question},
],
}
]
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs, video_kwargs = process_vision_info(messages, return_video_kwargs=True)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
**video_kwargs
)
inputs = inputs.to("cuda")
# Generate with score output
with torch.inference_mode():
outputs = model.generate(
**inputs,
max_new_tokens=1,
do_sample=False, # Use greedy decoding to get reliable logprobs
output_scores=True,
return_dict_in_generate=True
)
# Calculate probability of "Yes" response
scores = outputs.scores[0]
probs = torch.nn.functional.softmax(scores, dim=-1)
yes_token_id = processor.tokenizer.encode("Yes")[0]
score = probs[0, yes_token_id].item()
print(f"Video: {video_path}")
print(f"Description: '{text_description}'")
print(f"Score: {score:.4f}")
```
</details>
<details>
<summary>Natural Language Generation</summary>
```python
# The model is trained at 8.0 FPS, which we recommend for optimal inference
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info
# default: Load the model on the available device(s)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
"chancharikm/qwen2.5-vl-72B-cam-motion-preview", torch_dtype="auto", device_map="auto"
)
# We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.
# model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
# "chancharikm/qwen2.5-vl-72B-cam-motion-preview",
# torch_dtype=torch.bfloat16,
# attn_implementation="flash_attention_2",
# device_map="auto",
# )
# default processor
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-72B-Instruct")
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": "file:///path/to/video1.mp4",
"fps": 8.0,
},
{"type": "text", "text": "Describe the camera motion in this video."},
],
}
]
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs, video_kwargs = process_vision_info(messages, return_video_kwargs=True)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
**video_kwargs,
)
inputs = inputs.to("cuda")
# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
</details>
## Training and evaluation data
Training and evaluation data can be found in our [repo](https://github.com/sy77777en/CameraBench).
## ✏️ Citation
If you find this repository useful for your research, please use the following.
```
@article{lin2025camerabench,
title={Towards Understanding Camera Motions in Any Video},
author={Lin, Zhiqiu and Cen, Siyuan and Jiang, Daniel and Karhade, Jay and Wang, Hewei and Mitra, Chancharik and Ling, Tiffany and Huang, Yuhan and Liu, Sifan and Chen, Mingyu and Zawar, Rushikesh and Bai, Xue and Du, Yilun and Gan, Chuang and Ramanan, Deva},
journal={arXiv preprint arXiv:2504.15376},
year={2025},
}
``` |
chancharikm/qwen2.5-vl-32b-cam-motion-preview | chancharikm | 2025-05-22T04:01:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"llama-factory",
"full",
"generated_from_trainer",
"video-text-to-text",
"arxiv:2404.01291",
"arxiv:2504.15376",
"base_model:Qwen/Qwen2.5-VL-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-32B-Instruct",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | video-text-to-text | 2025-05-22T00:45:03Z | ---
base_model: Qwen/Qwen2.5-VL-32B-Instruct
library_name: transformers
license: other
tags:
- llama-factory
- full
- generated_from_trainer
pipeline_tag: video-text-to-text
model-index:
- name: bal_imb_cap_full_lr2e-4_epoch10.0_freezevisTrue_fps8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
## Model description
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct) on the largest high-quality camera motion dataset that is currently publicly available. This preview model is the current SOTA for classifying camera motion and for video-text retrieval with camera motion captions using [VQAScore](https://arxiv.org/pdf/2404.01291). Find more information about our work on our GitHub page for [CameraBench](https://github.com/sy77777en/CameraBench). *More updates to the benchmark and models will come in the future. Stay tuned!*
## Intended uses & limitations
The usage is identical to a [Qwen2.5-VL](https://github.com/QwenLM/Qwen2.5-VL) model. Our model is primarily useful for camera motion classification in videos as well as video-text retrieval (current SOTA in both tasks).
**A quick demo is shown below:**
<details>
<summary>Generative Scoring (for classification and retrieval):</summary>
```python
# Import necessary libraries
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info
import torch
# Load the model
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
"chancharikm/qwen2.5-vl-32B-cam-motion-preview", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-32B-Instruct")
# Prepare input data
video_path = "file:///path/to/video1.mp4"
text_description = "the camera tilting upward"
question = f"Does this video show \"{text_description}\"?"
# Format the input for the model
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": video_path,
"fps": 8.0, # Recommended FPS for optimal inference
},
{"type": "text", "text": question},
],
}
]
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs, video_kwargs = process_vision_info(messages, return_video_kwargs=True)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
**video_kwargs
)
inputs = inputs.to("cuda")
# Generate with score output
with torch.inference_mode():
outputs = model.generate(
**inputs,
max_new_tokens=1,
do_sample=False, # Use greedy decoding to get reliable logprobs
output_scores=True,
return_dict_in_generate=True
)
# Calculate probability of "Yes" response
scores = outputs.scores[0]
probs = torch.nn.functional.softmax(scores, dim=-1)
yes_token_id = processor.tokenizer.encode("Yes")[0]
score = probs[0, yes_token_id].item()
print(f"Video: {video_path}")
print(f"Description: '{text_description}'")
print(f"Score: {score:.4f}")
```
</details>
<details>
<summary>Natural Language Generation</summary>
```python
# The model is trained at 8.0 FPS, which we recommend for optimal inference
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info
# default: Load the model on the available device(s)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
"chancharikm/qwen2.5-vl-32B-cam-motion-preview", torch_dtype="auto", device_map="auto"
)
# We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.
# model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
# "chancharikm/qwen2.5-vl-32B-cam-motion-preview",
# torch_dtype=torch.bfloat16,
# attn_implementation="flash_attention_2",
# device_map="auto",
# )
# default processor
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-32B-Instruct")
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": "file:///path/to/video1.mp4",
"fps": 8.0,
},
{"type": "text", "text": "Describe the camera motion in this video."},
],
}
]
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs, video_kwargs = process_vision_info(messages, return_video_kwargs=True)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
**video_kwargs,
)
inputs = inputs.to("cuda")
# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
</details>
## Training and evaluation data
Training and evaluation data can be found in our [repo](https://github.com/sy77777en/CameraBench).
## ✏️ Citation
If you find this repository useful for your research, please use the following.
```
@article{lin2025camerabench,
title={Towards Understanding Camera Motions in Any Video},
author={Lin, Zhiqiu and Cen, Siyuan and Jiang, Daniel and Karhade, Jay and Wang, Hewei and Mitra, Chancharik and Ling, Tiffany and Huang, Yuhan and Liu, Sifan and Chen, Mingyu and Zawar, Rushikesh and Bai, Xue and Du, Yilun and Gan, Chuang and Ramanan, Deva},
journal={arXiv preprint arXiv:2504.15376},
year={2025},
}
``` |
auslawbench/Re-ranker-SaulLM-7B | auslawbench | 2025-05-22T03:58:33Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"en",
"arxiv:2412.06272",
"base_model:Equall/Saul-7B-Base",
"base_model:finetune:Equall/Saul-7B-Base",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-22T02:51:30Z | ---
library_name: transformers
license: cc-by-4.0
language:
- en
base_model:
- Equall/Saul-7B-Base
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** Ehsan Shareghi, Jiuzhou Han, Paul Burgess
- **Model type:** 7B
- **Language(s) (NLP):** English
- **License:** CC BY 4.0
- **Finetuned from model:** Saul-7B-Base
### Model Sources
<!-- Provide the basic links for the model. -->
- **Paper:** https://arxiv.org/pdf/2412.06272
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
Here's how you can run the model:
```python
# pip install git+https://github.com/huggingface/transformers.git
# pip install git+https://github.com/huggingface/peft.git
import torch
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig
)
from peft import PeftModel
model = AutoModelForCausalLM.from_pretrained(
"Equall/Saul-7B-Base",
quantization_config=BitsAndBytesConfig(load_in_8bit=True),
device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Equall/Saul-7B-Base")
tokenizer.pad_token = tokenizer.eos_token
model = PeftModel.from_pretrained(
model,
"auslawbench/Re-ranker-SaulLM-7B",
device_map="auto",
torch_dtype=torch.bfloat16,
)
model.eval()
fine_tuned_prompt = """
### Instruction:
{}
### Input:
{}
### Response:
{}"""
example_input="\nText:\nMany of ZAR’s grounds of appeal related to fact finding. Drawing on principles set down in several other courts and tribunals, the Appeal Panel summarised the circumstances in which leave may be granted for a person to appeal from findings of fact: <CASENAME> at [84].\n\nPotential Citations:\n\nZNX v ZNY [2020] NSWCATAP 41\nCitation Reasons: The case ZNX v ZNY [2020] NSWCATAP 41 is cited to emphasize that the Appeal Panel's role does not include drafting grounds of appeal for an unrepresented appellant.\n\nCollins v Urban [2014] NSWCATAP 17\nCitation Reasons: The cited case, , is referenced to illustrate the principles guiding the consideration of whether leave to appeal should be granted when there are issues with a fact-finding exercise.\n\nSchwartz Family Co Pty Ltd v Capitol Carpets Pty Ltd [2017] NSWCA 223\nCitation Reasons: The cited case is referenced to emphasize the necessity of explicitly identifying the grounds of appeal, particularly in the context of an error of law in judicial review applications.\n\nNavazi v New South Wales Land and Housing Corporation [2015] NSWCA 308\nCitation Reasons: The case Navazi v New South Wales Land and Housing Corporation [2015] NSWCA 308 is cited to illustrate that the existence of a right of appeal can lead to discretionary considerations in judicial review.\n\nLloyd v Veterinary Surgeons Investigating Committee [2005] NSWCA 456\nCitation Reasons: The case of Lloyd v Veterinary Surgeons Investigating Committee is cited to illustrate that the Appeal Panel has the discretion to grant leave for appeals on questions of fact, regardless of whether an error of law has been identified.\n"
model_input = fine_tuned_prompt.format("Predict the citation in the text.", example_input, '')
inputs = tokenizer(model_input, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=256, temperature=1.0)
output = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(output.split("### Response:")[1].strip().split('>')[0] + '>')
```
## Citation
**BibTeX:**
```
@misc{shareghi2024auslawcite,
title={Methods for Legal Citation Prediction in the Age of LLMs: An Australian Law Case Study},
author={Ehsan Shareghi and Jiuzhou Han and Paul Burgess},
year={2024},
eprint={2412.06272},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
the-acorn-ai/Qwen3-4B-Base-4K-KuhnPoker-Random-Role-0522-Zichen-step_00032 | the-acorn-ai | 2025-05-22T03:58:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-22T03:55:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DanielNRU/pollen-ner2-750 | DanielNRU | 2025-05-22T03:56:54Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:DeepPavlov/bert-base-bg-cs-pl-ru-cased",
"base_model:adapter:DeepPavlov/bert-base-bg-cs-pl-ru-cased",
"region:us"
] | null | 2025-05-22T03:48:22Z | ---
library_name: peft
base_model: DeepPavlov/bert-base-bg-cs-pl-ru-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: pollen-ner2-750
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pollen-ner2-750
This model is a fine-tuned version of [DeepPavlov/bert-base-bg-cs-pl-ru-cased](https://huggingface.co/DeepPavlov/bert-base-bg-cs-pl-ru-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2426
- Precision: 0.7264
- Recall: 0.8052
- F1: 0.7638
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| No log | 1.0 | 94 | 0.2683 | 0.6950 | 0.7871 | 0.7382 |
| No log | 2.0 | 188 | 0.2686 | 0.6809 | 0.8012 | 0.7362 |
| No log | 3.0 | 282 | 0.2616 | 0.6961 | 0.7912 | 0.7406 |
| No log | 4.0 | 376 | 0.2646 | 0.6785 | 0.8052 | 0.7365 |
| No log | 5.0 | 470 | 0.2568 | 0.6899 | 0.8133 | 0.7465 |
| 0.5501 | 6.0 | 564 | 0.2519 | 0.7058 | 0.8092 | 0.7540 |
| 0.5501 | 7.0 | 658 | 0.2477 | 0.7072 | 0.8052 | 0.7531 |
| 0.5501 | 8.0 | 752 | 0.2426 | 0.7264 | 0.8052 | 0.7638 |
| 0.5501 | 9.0 | 846 | 0.2450 | 0.7110 | 0.8153 | 0.7596 |
| 0.5501 | 10.0 | 940 | 0.2456 | 0.7091 | 0.8173 | 0.7593 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.7.0+cu128
- Datasets 3.5.0
- Tokenizers 0.21.1 |
TEN-framework/TEN_Turn_Detection | TEN-framework | 2025-05-22T03:56:36Z | 285 | 18 | null | [
"safetensors",
"turn detection",
"conversational",
"natural language understanding",
"text-generation",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-04-28T14:55:00Z | ---
pipeline_tag: text-generation
tags:
- turn detection
- conversational
- natural language understanding
license: apache-2.0
---
# **TEN Turn Detection**
***Turn detection for full-duplex dialogue communication***
## Introduction
**TEN Turn Detection** is an advanced intelligent turn detection model designed specifically for natural and dynamic communication between humans and AI agents. This technology addresses one of the most challenging aspects of human-AI conversation: detecting natural turn-taking cues and enabling contextually-aware interruptions. TEN incorporates deep semantic understanding of conversation context and linguistic patterns to create more natural dialogue with AI.
<div align="center">
<img src="images/turn_detection.svg" alt="TEN Turn Detection SVG Diagram" width="800"/>
</div>
**TEN Turn Detection** categorizes user's text into three key states:
**finished**: A finished utterance where the user has expressed a complete thought and expects a response. Example: "Hey there I was wondering can you help me with my order"
**wait**: An ambiguous utterance where the system cannot confidently determine if more speech will follow. Example: "This conversation needs to end now"
**unfinished**: A clearly unfinished utterance where the user has momentarily paused but intends to continue speaking. Example: "Hello I have a question about"
These three classification states allow the TEN system to create natural conversation dynamics by intelligently managing turn-taking, reducing awkward interruptions while maintaining conversation flow.
TEN Turn Detection utilizes a multi-layered approach based on a transformer-based language model (Qwen2.5-7B) for semantic analysis.
## Key Features
- **Context-Aware Turn Management**
TEN Turn Detection analyzes linguistic patterns and semantic context to accurately identify turn completion points. This capability enables intelligent interruption handling, allowing the system to determine when interruptions are contextually appropriate while maintaining natural conversation flow across various dialogue scenarios.
- **Multilingual Turn Detection Support**
TEN Turn Detection provides comprehensive support for both English and Chinese languages. It is engineered to accurately identify turn-taking cues and completion signals across multilingual conversations.
- **Superior Performance**
Compared with multiple open-source solutions, TEN achieves superior performance across all metrics on our publicly available test dataset.
## Prepared Dataset
We have open-sourced the TEN-Turn-Detection TestSet, a bilingual (Chinese and English) collection of conversational inputs specifically designed to evaluate turn detection capabilities in AI dialogue systems. The dataset consists of three distinct components:
*wait.txt*: Contains expressions requesting conversation pauses or termination
*unfinished.txt*: Features incomplete dialogue inputs with truncated utterances
*finished.txt*: Provides complete conversational inputs across multiple domains
## Detection Performance
We conducted comprehensive evaluations comparing several open-source models for turn detection using our test dataset:
<div align="center">
| LANGUAGE | MODEL | FINISHED<br>ACCURACY | UNFINISHED<br>ACCURACY | WAIT<br>ACCURACY |
|:--------:|:-----:|:--------------------:|:----------------------:|:----------------:|
| English | Model A | 59.74% | 86.46% | N/A |
| English | Model B | 71.61% | 96.88% | N/A |
| English | **TEN Turn Detection** | **90.64%** | **98.44%** | **91%** |

| LANGUAGE | MODEL | FINISHED<br>ACCURACY | UNFINISHED<br>ACCURACY | WAIT<br>ACCURACY |
|:--------:|:-----:|:--------------------:|:----------------------:|:----------------:|
| Chinese | Model B | 74.63% | 88.89% | N/A |
| Chinese | **TEN Turn Detection** | **98.90%** | **92.74%** | **92%** |
</div>
> **Notes:**
> 1. Model A doesn't support Chinese language processing
> 2. Neither Model A nor Model B support the "WAIT" state detection
## Quick Start
TEN Turn Detection is also available on GitHub at [TEN-framework/ten-turn-detection](https://github.com/TEN-framework/ten-turn-detection).
### Installation
```
pip install "transformers>=4.45.0"
pip install "torch>=2.0.0"
```
### Model Weights
The TEN Turn Detection model weights are available on Hugging Face.
### Inference
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Load model and tokenizer
model_id = 'TEN-framework/TEN_Turn_Detection'
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
# Move model to GPU
model = model.cuda()
model.eval()
# Function for inference
def analyze_text(text, system_prompt=""):
    inf_messages = [{"role": "system", "content": system_prompt}] + [{"role": "user", "content": text}]
    input_ids = tokenizer.apply_chat_template(
        inf_messages,
        add_generation_prompt=True,
        return_tensors="pt"
    ).cuda()
    with torch.no_grad():
        outputs = model.generate(
            input_ids,
            max_new_tokens=1,
            do_sample=True,
            top_p=0.1,
            temperature=0.1,
            pad_token_id=tokenizer.eos_token_id
        )
        response = outputs[0][input_ids.shape[-1]:]
        return tokenizer.decode(response, skip_special_tokens=True)
# Example usage
text = "Hello I have a question about"
result = analyze_text(text)
print(f"Input: '{text}'")
print(f"Turn Detection Result: '{result}'")
```
## Citation
If you use TEN Turn Detection in your research or applications, please cite:
```
@misc{TEN_Turn_Detection,
  author = {TEN Team},
  title  = {TEN Turn Detection: Turn detection for full-duplex dialogue communication},
  year   = {2025},
  url    = {https://github.com/TEN-framework/ten-turn-detection},
}
```
## License
This project is Apache 2.0 licensed. |
polyglots/llama-3-8b-DPO-si-Sentiment-Tagger-14476-si-Sentiment-Tagger-DPO-Eval-7238 | polyglots | 2025-05-22T03:55:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:polyglots/llama-3-8b-DPO-si-Sentiment-Tagger-14476",
"base_model:finetune:polyglots/llama-3-8b-DPO-si-Sentiment-Tagger-14476",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-22T03:55:26Z | ---
base_model: polyglots/llama-3-8b-DPO-si-Sentiment-Tagger-14476
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** polyglots
- **License:** apache-2.0
- **Finetuned from model :** polyglots/llama-3-8b-DPO-si-Sentiment-Tagger-14476
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
pryaosuji/zxcvzxcv | pryaosuji | 2025-05-22T03:54:23Z | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-05-22T03:54:23Z | ---
license: bigcode-openrail-m
---
|
tomwen/test2 | tomwen | 2025-05-22T03:53:27Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-30T07:43:40Z | ---
license: apache-2.0
---
|
samuelchristlie/Wan2.1-VACE-1.3B-GGUF | samuelchristlie | 2025-05-22T03:52:20Z | 0 | 0 | diffusers | [
"diffusers",
"gguf",
"video",
"video-generation",
"text-to-video",
"en",
"base_model:Wan-AI/Wan2.1-VACE-1.3B",
"base_model:quantized:Wan-AI/Wan2.1-VACE-1.3B",
"license:apache-2.0",
"region:us"
] | text-to-video | 2025-05-22T03:02:20Z | ---
license: apache-2.0
language:
- en
pipeline_tag: text-to-video
library_name: diffusers
tags:
- video
- video-generation
base_model:
- Wan-AI/Wan2.1-VACE-1.3B
---
```
________ ______ ____ ___ ___ _______ ______ _______ ____ ______ ______ _______ _______ _______ _______
| | | |.---.-.-----.|__ | |_ | ______| | | _ | | ___|_____|_ | |__ | __ \______| __| __| | | ___|
| | | || _ | || __|__ _| ||______| | | | ---| ___|______|| |_ __|__ | __ <______| | | | | | | ___|
|________||___._|__|__||______|__|______| \_____/|___|___|______|_______| |______|__|______|______/ |_______|_______|_______|___|
```
# Wan-2.1-VACE-1.3B-GGUF
## Direct GGUF Conversion of Wan2.1-VACE-1.3B
Wan2.1 is an open-source suite of video foundation models, compatible with consumer-grade GPUs, that excels in various video generation tasks like text-to-video, image-to-video, and video editing, even supporting visual text generation.
## Table of Contents 📝
1. ▶ [Usage](#usage)
2. 📃 [License](#license)
3. 🙏 [Acknowledgements](#acknowledgements)
<a name="usage"/>
## ▶ Usage
Download models using `huggingface-cli`:
```
pip install "huggingface_hub[cli]"
huggingface-cli download samuelchristlie/Wan2.1-VACE-1.3B-GGUF --local-dir ./Wan2.1-VACE-1.3B-GGUF
```
You can also download directly from [this page](https://huggingface.co/samuelchristlie/Wan2.1-VACE-1.3B-GGUF/tree/main).
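Equivalently, a small Python sketch using `huggingface_hub` (same effect as the CLI command above):
```python
from huggingface_hub import snapshot_download

# Download every file in the repo into a local folder
snapshot_download(
    repo_id="samuelchristlie/Wan2.1-VACE-1.3B-GGUF",
    local_dir="./Wan2.1-VACE-1.3B-GGUF",
)
```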
<a name="license"/>
## 📃 License
This model is a derivative work of the original model licensed under the Apache 2.0 License, and is therefore distributed under the terms of the same license.
<a name="acknowledgements"/>
## 🙏 Acknowledgements
Thanks to:
- Patrick Gillespie for creating the ASCII text art tool used in this project: https://patorjk.com/software/taag/
- Wan-AI for the Wan model: https://huggingface.co/Wan-AI/Wan2.1-VACE-1.3B
- city96: https://huggingface.co/city96
 |
pot99rta/CaptainMaid-VioletReign-DarkMell-12B-GGUF | pot99rta | 2025-05-22T03:50:59Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:pot99rta/CaptainMaid-VioletReign-DarkMell-12B",
"base_model:quantized:pot99rta/CaptainMaid-VioletReign-DarkMell-12B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-21T21:25:44Z | ---
base_model: pot99rta/CaptainMaid-VioletReign-DarkMell-12B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# CaptainMaid-VioletReign-DarkMell-12B-GGUF

This Model Uses Mistral For The Preset.
Merge Heavy Model - Sensitive to High Temp and random settings.
This model was converted to GGUF format from [`pot99rta/CaptainMaid-VioletReign-DarkMell-12B`](https://huggingface.co/pot99rta/CaptainMaid-VioletReign-DarkMell-12B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/pot99rta/CaptainMaid-VioletReign-DarkMell-12B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo pot99rta/CaptainMaid-VioletReign-DarkMell-12B-Q8_0-GGUF --hf-file captainmaid-violetreign-darkmell-12b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo pot99rta/CaptainMaid-VioletReign-DarkMell-12B-Q8_0-GGUF --hf-file captainmaid-violetreign-darkmell-12b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo pot99rta/CaptainMaid-VioletReign-DarkMell-12B-Q8_0-GGUF --hf-file captainmaid-violetreign-darkmell-12b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo pot99rta/CaptainMaid-VioletReign-DarkMell-12B-Q8_0-GGUF --hf-file captainmaid-violetreign-darkmell-12b-q8_0.gguf -c 2048
```
|
mradermacher/Malaysian-Qwen2.5-32B-Reasoning-SFT-i1-GGUF | mradermacher | 2025-05-22T03:50:29Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:mesolitica/Malaysian-Qwen2.5-32B-Reasoning-SFT",
"base_model:quantized:mesolitica/Malaysian-Qwen2.5-32B-Reasoning-SFT",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-22T00:18:50Z | ---
base_model: mesolitica/Malaysian-Qwen2.5-32B-Reasoning-SFT
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/mesolitica/Malaysian-Qwen2.5-32B-Reasoning-SFT
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Malaysian-Qwen2.5-32B-Reasoning-SFT-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
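For example, split quants can usually be joined by plain concatenation before loading (a sketch; the part file names here are hypothetical):
```bash
cat model.i1-Q6_K.gguf.part1of2 model.i1-Q6_K.gguf.part2of2 > model.i1-Q6_K.gguf
```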
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Malaysian-Qwen2.5-32B-Reasoning-SFT-i1-GGUF/resolve/main/Malaysian-Qwen2.5-32B-Reasoning-SFT.i1-IQ1_S.gguf) | i1-IQ1_S | 7.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Malaysian-Qwen2.5-32B-Reasoning-SFT-i1-GGUF/resolve/main/Malaysian-Qwen2.5-32B-Reasoning-SFT.i1-IQ1_M.gguf) | i1-IQ1_M | 8.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Malaysian-Qwen2.5-32B-Reasoning-SFT-i1-GGUF/resolve/main/Malaysian-Qwen2.5-32B-Reasoning-SFT.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/Malaysian-Qwen2.5-32B-Reasoning-SFT-i1-GGUF/resolve/main/Malaysian-Qwen2.5-32B-Reasoning-SFT.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/Malaysian-Qwen2.5-32B-Reasoning-SFT-i1-GGUF/resolve/main/Malaysian-Qwen2.5-32B-Reasoning-SFT.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/Malaysian-Qwen2.5-32B-Reasoning-SFT-i1-GGUF/resolve/main/Malaysian-Qwen2.5-32B-Reasoning-SFT.i1-IQ2_M.gguf) | i1-IQ2_M | 11.4 | |
| [GGUF](https://huggingface.co/mradermacher/Malaysian-Qwen2.5-32B-Reasoning-SFT-i1-GGUF/resolve/main/Malaysian-Qwen2.5-32B-Reasoning-SFT.i1-Q2_K_S.gguf) | i1-Q2_K_S | 11.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Malaysian-Qwen2.5-32B-Reasoning-SFT-i1-GGUF/resolve/main/Malaysian-Qwen2.5-32B-Reasoning-SFT.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Malaysian-Qwen2.5-32B-Reasoning-SFT-i1-GGUF/resolve/main/Malaysian-Qwen2.5-32B-Reasoning-SFT.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Malaysian-Qwen2.5-32B-Reasoning-SFT-i1-GGUF/resolve/main/Malaysian-Qwen2.5-32B-Reasoning-SFT.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/Malaysian-Qwen2.5-32B-Reasoning-SFT-i1-GGUF/resolve/main/Malaysian-Qwen2.5-32B-Reasoning-SFT.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Malaysian-Qwen2.5-32B-Reasoning-SFT-i1-GGUF/resolve/main/Malaysian-Qwen2.5-32B-Reasoning-SFT.i1-IQ3_S.gguf) | i1-IQ3_S | 14.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Malaysian-Qwen2.5-32B-Reasoning-SFT-i1-GGUF/resolve/main/Malaysian-Qwen2.5-32B-Reasoning-SFT.i1-IQ3_M.gguf) | i1-IQ3_M | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/Malaysian-Qwen2.5-32B-Reasoning-SFT-i1-GGUF/resolve/main/Malaysian-Qwen2.5-32B-Reasoning-SFT.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Malaysian-Qwen2.5-32B-Reasoning-SFT-i1-GGUF/resolve/main/Malaysian-Qwen2.5-32B-Reasoning-SFT.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Malaysian-Qwen2.5-32B-Reasoning-SFT-i1-GGUF/resolve/main/Malaysian-Qwen2.5-32B-Reasoning-SFT.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/Malaysian-Qwen2.5-32B-Reasoning-SFT-i1-GGUF/resolve/main/Malaysian-Qwen2.5-32B-Reasoning-SFT.i1-Q4_0.gguf) | i1-Q4_0 | 18.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Malaysian-Qwen2.5-32B-Reasoning-SFT-i1-GGUF/resolve/main/Malaysian-Qwen2.5-32B-Reasoning-SFT.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Malaysian-Qwen2.5-32B-Reasoning-SFT-i1-GGUF/resolve/main/Malaysian-Qwen2.5-32B-Reasoning-SFT.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Malaysian-Qwen2.5-32B-Reasoning-SFT-i1-GGUF/resolve/main/Malaysian-Qwen2.5-32B-Reasoning-SFT.i1-Q4_1.gguf) | i1-Q4_1 | 20.7 | |
| [GGUF](https://huggingface.co/mradermacher/Malaysian-Qwen2.5-32B-Reasoning-SFT-i1-GGUF/resolve/main/Malaysian-Qwen2.5-32B-Reasoning-SFT.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/Malaysian-Qwen2.5-32B-Reasoning-SFT-i1-GGUF/resolve/main/Malaysian-Qwen2.5-32B-Reasoning-SFT.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/Malaysian-Qwen2.5-32B-Reasoning-SFT-i1-GGUF/resolve/main/Malaysian-Qwen2.5-32B-Reasoning-SFT.i1-Q6_K.gguf) | i1-Q6_K | 27.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
pot99rta/CaptainMaid-12B-VioletMell-V0.420-GGUF | pot99rta | 2025-05-22T03:49:43Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:pot99rta/CaptainMaid-12B-VioletMell-V0.420",
"base_model:quantized:pot99rta/CaptainMaid-12B-VioletMell-V0.420",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-21T20:47:12Z | ---
base_model: pot99rta/CaptainMaid-12B-VioletMell-V0.420
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# CaptainMaid-12B-VioletMell-V0.420-GGUF

This Model Uses Mistral For The Preset.
You Can Use ChatML Too - Only Tested ChatML with Mistral Tokenizer.
The model seems to handle higher temps and random settings well in my tests.
This model was converted to GGUF format from [`pot99rta/CaptainMaid-12B-VioletMell-V0.420`](https://huggingface.co/pot99rta/CaptainMaid-12B-VioletMell-V0.420) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/pot99rta/CaptainMaid-12B-VioletMell-V0.420) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo pot99rta/CaptainMaid-12B-VioletMell-V0.420-Q8_0-GGUF --hf-file captainmaid-12b-violetmell-v0.420-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo pot99rta/CaptainMaid-12B-VioletMell-V0.420-Q8_0-GGUF --hf-file captainmaid-12b-violetmell-v0.420-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo pot99rta/CaptainMaid-12B-VioletMell-V0.420-Q8_0-GGUF --hf-file captainmaid-12b-violetmell-v0.420-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo pot99rta/CaptainMaid-12B-VioletMell-V0.420-Q8_0-GGUF --hf-file captainmaid-12b-violetmell-v0.420-q8_0.gguf -c 2048
```
|
DanielNRU/pollen-ner2-700 | DanielNRU | 2025-05-22T03:48:08Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:DeepPavlov/bert-base-bg-cs-pl-ru-cased",
"base_model:adapter:DeepPavlov/bert-base-bg-cs-pl-ru-cased",
"region:us"
] | null | 2025-05-22T03:44:49Z | ---
library_name: peft
base_model: DeepPavlov/bert-base-bg-cs-pl-ru-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: pollen-ner2-700
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pollen-ner2-700
This model is a fine-tuned version of [DeepPavlov/bert-base-bg-cs-pl-ru-cased](https://huggingface.co/DeepPavlov/bert-base-bg-cs-pl-ru-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2760
- Precision: 0.6858
- Recall: 0.7932
- F1: 0.7356
## Model description
More information needed
## Intended uses & limitations
More information needed
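A loading sketch (assumptions: the adapter targets a token-classification head on the base model below, and the entity label set, which this card does not publish, must match what the adapter was trained with):
```python
from peft import PeftModel
from transformers import AutoModelForTokenClassification, AutoTokenizer

base_id = "DeepPavlov/bert-base-bg-cs-pl-ru-cased"
# num_labels=3 is a placeholder; replace it with the adapter's actual label count
base = AutoModelForTokenClassification.from_pretrained(base_id, num_labels=3)
model = PeftModel.from_pretrained(base, "DanielNRU/pollen-ner2-700")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```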
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| No log | 1.0 | 88 | 0.2763 | 0.6856 | 0.7751 | 0.7276 |
| No log | 2.0 | 176 | 0.2760 | 0.6858 | 0.7932 | 0.7356 |
| No log | 3.0 | 264 | 0.2686 | 0.6865 | 0.7871 | 0.7334 |
| No log | 4.0 | 352 | 0.2685 | 0.6799 | 0.7892 | 0.7305 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.7.0+cu128
- Datasets 3.5.0
- Tokenizers 0.21.1 |
casque/Dhevv-Armor-Flames | casque | 2025-05-22T03:48:07Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-05-22T03:47:45Z | ---
license: creativeml-openrail-m
---
|
FormlessAI/2e689380-9e32-4cde-af94-89003b0cbef7 | FormlessAI | 2025-05-22T03:47:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"base_model:NousResearch/Genstruct-7B",
"base_model:finetune:NousResearch/Genstruct-7B",
"endpoints_compatible",
"region:us"
] | null | 2025-05-22T00:45:25Z | ---
base_model: NousResearch/Genstruct-7B
library_name: transformers
model_name: 2e689380-9e32-4cde-af94-89003b0cbef7
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for 2e689380-9e32-4cde-af94-89003b0cbef7
This model is a fine-tuned version of [NousResearch/Genstruct-7B](https://huggingface.co/NousResearch/Genstruct-7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FormlessAI/2e689380-9e32-4cde-af94-89003b0cbef7", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/xqfot957)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0+cu118
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
pot99rta/CaptainMaid-12B-VioletMell-V0.420 | pot99rta | 2025-05-22T03:46:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Nitral-AI/Captain-Eris_Violet-V0.420-12B",
"base_model:merge:Nitral-AI/Captain-Eris_Violet-V0.420-12B",
"base_model:pot99rta/PatriMaid-12B-Forgottenslop-NeonMell",
"base_model:merge:pot99rta/PatriMaid-12B-Forgottenslop-NeonMell",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-21T20:10:22Z | ---
base_model:
- Nitral-AI/Captain-Eris_Violet-V0.420-12B
- pot99rta/PatriMaid-12B-Forgottenslop-NeonMell
library_name: transformers
tags:
- mergekit
- merge
---
# CaptainMaid-12B-VioletMell-V0.420

This Model Uses Mistral For The Preset.
You Can Use ChatML Too - Only Tested ChatML with Mistral Tokenizer.
The model seems to handle higher temps and random settings well in my tests.
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
### Models Merged
The following models were included in the merge:
* [Nitral-AI/Captain-Eris_Violet-V0.420-12B](https://huggingface.co/Nitral-AI/Captain-Eris_Violet-V0.420-12B)
* [pot99rta/PatriMaid-12B-Forgottenslop-NeonMell](https://huggingface.co/pot99rta/PatriMaid-12B-Forgottenslop-NeonMell)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: Nitral-AI/Captain-Eris_Violet-V0.420-12B
        layer_range: [0, 40]
      - model: pot99rta/PatriMaid-12B-Forgottenslop-NeonMell
        layer_range: [0, 40]
merge_method: slerp
base_model: Nitral-AI/Captain-Eris_Violet-V0.420-12B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.420
dtype: bfloat16
```
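To reproduce the merge, an invocation along these lines would be used (a sketch; assumes mergekit is installed and the YAML above is saved as `config.yaml`):
```bash
pip install mergekit
mergekit-yaml config.yaml ./CaptainMaid-12B-VioletMell-V0.420 --cuda
```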
|
xuan-luo/MTPQwen3-8B-T1234-Eagle-id4 | xuan-luo | 2025-05-22T03:45:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mtpqwen3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | text-generation | 2025-05-22T03:41:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Baselhany/Graduation_Project_Distilation_Whisper_base3 | Baselhany | 2025-05-22T03:43:53Z | 34 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ar",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-05-08T13:30:55Z | ---
library_name: transformers
language:
- ar
license: apache-2.0
base_model: openai/whisper-base
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper base AR - BA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper base AR - BA
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the quran-ayat-speech-to-text dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0928
- Wer: 0.2043
## Model description
More information needed
## Intended uses & limitations
More information needed
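A minimal transcription sketch (assumptions: a local audio file named `recitation.wav`; the model follows the standard Whisper pipeline API):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Baselhany/Graduation_Project_Distilation_Whisper_base3",
)
print(asr("recitation.wav")["text"])
```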
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 1.2944 | 1.0 | 313 | 0.0886 | 0.1967 |
| 1.2819 | 2.0 | 626 | 0.0902 | 0.1923 |
| 1.2752 | 3.0 | 939 | 0.0902 | 0.1986 |
| 1.1425 | 4.0 | 1252 | 0.0915 | 0.1989 |
| 1.0812 | 5.0 | 1565 | 0.0900 | 0.1914 |
| 0.9708 | 6.0 | 1878 | 0.0900 | 0.1916 |
| 0.9029 | 7.0 | 2191 | 0.0891 | 0.1985 |
| 0.8248 | 8.0 | 2504 | 0.0896 | 0.1916 |
| 0.7778 | 9.0 | 2817 | 0.0897 | 0.1941 |
| 0.7485 | 10.0 | 3130 | 0.0890 | 0.1944 |
| 0.7219 | 11.0 | 3443 | 0.0883 | 0.1961 |
| 0.6584 | 12.0 | 3756 | 0.0889 | 0.1948 |
| 0.6516 | 13.0 | 4069 | 0.0883 | 0.1951 |
| 0.6233 | 14.0 | 4382 | 0.0882 | 0.1942 |
| 0.6017 | 14.9536 | 4680 | 0.0883 | 0.1957 |
### Framework versions
- Transformers 4.51.1
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
|
casque/Dhevv-DragonScaleStyle | casque | 2025-05-22T03:42:08Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-05-22T03:41:50Z | ---
license: creativeml-openrail-m
---
|
yunjae-won/mp_mistral7bv3_sft_dpo_beta5e-2_epoch1_ratio_dpor_multisample | yunjae-won | 2025-05-22T03:41:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-22T03:37:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LowkeySuicidal/q-Taxi-v1-4x4-noSlippery | LowkeySuicidal | 2025-05-22T03:39:58Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-22T03:39:56Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
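# Note: load_from_hub is the pickle-loading helper from the Hugging Face Deep RL
# course; it is not part of an installed package, so define or import it first.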
model = load_from_hub(repo_id="LowkeySuicidal/q-Taxi-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
kjamesh/a2c-PandaReachDense-v3_TEST | kjamesh | 2025-05-22T03:38:25Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-21T23:43:46Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -16.07 +/- 3.99
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; verify it against the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename follows the usual huggingface_sb3 naming convention (an assumption)
checkpoint = load_from_hub("kjamesh/a2c-PandaReachDense-v3_TEST", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
binggwong/mujoco_cube_LoRa_adapter | binggwong | 2025-05-22T03:37:59Z | 0 | 0 | null | [
"safetensors",
"unsloth",
"license:apache-2.0",
"region:us"
] | null | 2025-05-21T04:41:35Z | ---
license: apache-2.0
tags:
- unsloth
---
|
amps93/Qwen3-1.7B_qlora | amps93 | 2025-05-22T03:36:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-22T03:36:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
shanchen/ds-limo-linearja-250 | shanchen | 2025-05-22T03:34:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2203.05482",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"base_model:merge:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"base_model:shanchen/ds-limo-ja-250",
"base_model:merge:shanchen/ds-limo-ja-250",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-22T03:27:49Z | ---
base_model:
- shanchen/ds-limo-ja-250
- deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
library_name: transformers
tags:
- mergekit
- merge
---
# mlinearja
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* [shanchen/ds-limo-ja-250](https://huggingface.co/shanchen/ds-limo-ja-250)
* [deepseek-ai/DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
    parameters:
      weight: 1.0
  - model: shanchen/ds-limo-ja-250
    parameters:
      weight: 0.5
merge_method: linear
dtype: float16
```
|
PaceKW/bert-base-indonesian-1.5G-multilabel-indonesian-hate-speech-new | PaceKW | 2025-05-22T03:33:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:cahya/bert-base-indonesian-1.5G",
"base_model:finetune:cahya/bert-base-indonesian-1.5G",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-22T03:31:12Z | ---
library_name: transformers
license: mit
base_model: cahya/bert-base-indonesian-1.5G
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: bert-base-indonesian-1.5G-multilabel-indonesian-hate-speech-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-indonesian-1.5G-multilabel-indonesian-hate-speech-new
This model is a fine-tuned version of [cahya/bert-base-indonesian-1.5G](https://huggingface.co/cahya/bert-base-indonesian-1.5G) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3641
- F1: 0.7802
- Roc Auc: 0.8639
- Accuracy: 0.7156
## Model description
More information needed
## Intended uses & limitations
More information needed
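A minimal multilabel inference sketch (assumptions: the model emits independent per-label logits, so a sigmoid with a 0.5 threshold is applied; the label names are not documented in this card):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "PaceKW/bert-base-indonesian-1.5G-multilabel-indonesian-hate-speech-new"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("contoh teks masukan", return_tensors="pt")
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]
# Labels whose probability exceeds the 0.5 threshold are treated as active
print([(model.config.id2label[i], round(p.item(), 3)) for i, p in enumerate(probs) if p > 0.5])
```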
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.3106 | 1.0 | 659 | 0.2504 | 0.6779 | 0.7832 | 0.5978 |
| 0.2235 | 2.0 | 1318 | 0.2113 | 0.7466 | 0.8392 | 0.6441 |
| 0.1722 | 3.0 | 1977 | 0.2283 | 0.7511 | 0.8493 | 0.6581 |
| 0.097 | 4.0 | 2636 | 0.2421 | 0.7626 | 0.8490 | 0.6874 |
| 0.0643 | 5.0 | 3295 | 0.2727 | 0.7584 | 0.8417 | 0.6938 |
| 0.0572 | 6.0 | 3954 | 0.2817 | 0.7662 | 0.8662 | 0.6737 |
| 0.0304 | 7.0 | 4613 | 0.3075 | 0.7606 | 0.8475 | 0.6879 |
| 0.021 | 8.0 | 5272 | 0.3195 | 0.7697 | 0.8626 | 0.6932 |
| 0.0157 | 9.0 | 5931 | 0.3347 | 0.7663 | 0.8477 | 0.7052 |
| 0.0095 | 10.0 | 6590 | 0.3353 | 0.7759 | 0.8598 | 0.7118 |
| 0.0086 | 11.0 | 7249 | 0.3467 | 0.7768 | 0.8590 | 0.7136 |
| 0.0063 | 12.0 | 7908 | 0.3503 | 0.7795 | 0.8644 | 0.7128 |
| 0.0046 | 13.0 | 8567 | 0.3577 | 0.7797 | 0.8613 | 0.7153 |
| 0.0037 | 14.0 | 9226 | 0.3622 | 0.7801 | 0.8674 | 0.7115 |
| 0.0046 | 15.0 | 9885 | 0.3641 | 0.7802 | 0.8639 | 0.7156 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
phospho-app/MarcWester-ACT-m7-iz22z | phospho-app | 2025-05-22T03:31:32Z | 0 | 0 | null | [
"safetensors",
"phosphobot",
"act",
"region:us"
] | null | 2025-05-22T01:42:15Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful, try it out on your robot!
## Training parameters:
- **Dataset**: [MarcWester/m7](https://huggingface.co/datasets/MarcWester/m7)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 40
- **Training steps**: 8000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
DanielNRU/pollen-ner2-550 | DanielNRU | 2025-05-22T03:31:09Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:DeepPavlov/bert-base-bg-cs-pl-ru-cased",
"base_model:adapter:DeepPavlov/bert-base-bg-cs-pl-ru-cased",
"region:us"
] | null | 2025-05-22T03:25:58Z | ---
library_name: peft
base_model: DeepPavlov/bert-base-bg-cs-pl-ru-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: pollen-ner2-550
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pollen-ner2-550
This model is a fine-tuned version of [DeepPavlov/bert-base-bg-cs-pl-ru-cased](https://huggingface.co/DeepPavlov/bert-base-bg-cs-pl-ru-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3432
- Precision: 0.6156
- Recall: 0.7269
- F1: 0.6667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| No log | 1.0 | 69 | 0.3899 | 0.5340 | 0.6948 | 0.6038 |
| No log | 2.0 | 138 | 0.3667 | 0.5738 | 0.6948 | 0.6285 |
| No log | 3.0 | 207 | 0.3638 | 0.5784 | 0.7108 | 0.6378 |
| No log | 4.0 | 276 | 0.3495 | 0.6007 | 0.7068 | 0.6494 |
| No log | 5.0 | 345 | 0.3547 | 0.5805 | 0.7169 | 0.6415 |
| No log | 6.0 | 414 | 0.3432 | 0.6156 | 0.7269 | 0.6667 |
| No log | 7.0 | 483 | 0.3453 | 0.6026 | 0.7369 | 0.6631 |
| 0.7026 | 8.0 | 552 | 0.3397 | 0.6142 | 0.7289 | 0.6667 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.7.0+cu128
- Datasets 3.5.0
- Tokenizers 0.21.1 |
Ichi075/GEN-1 | Ichi075 | 2025-05-22T03:29:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"en",
"ja",
"dataset:AhmedSSabir/Japanese-wiki-dump-sentence-dataset",
"base_model:Qwen/Qwen3-0.6B",
"base_model:finetune:Qwen/Qwen3-0.6B",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-22T01:53:36Z | ---
library_name: transformers
license: mit
datasets:
- AhmedSSabir/Japanese-wiki-dump-sentence-dataset
language:
- en
- ja
base_model:
- Qwen/Qwen3-0.6B
pipeline_tag: text-generation
---
# GEN-1

## About GEN-1
GEN-1 is a Japanese-focused small language model (SLM) with about 600 million parameters, built on Qwen3-0.6B.
## How to use
```py
from transformers import pipeline
pipe = pipeline("text-generation", model="Ichi075/GEN-1")
messages = [
{"role": "user", "content": "Who are you?"},
]
pipe(messages)
```
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Ichi075/GEN-1")
model = AutoModelForCausalLM.from_pretrained("Ichi075/GEN-1")
inputs = tokenizer("こんにちは、", return_tensors="pt")  # "Hello," as a prompt
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
dipanshu449/orpheus-tts-finetuned-model-hi-speaker-with-emotive-tags-main-test | dipanshu449 | 2025-05-22T03:28:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-22T03:26:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
iSolver-AI/FEnet | iSolver-AI | 2025-05-22T03:27:01Z | 55 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"xclip",
"fill-mask",
"custom_code",
"arxiv:2410.06885",
"arxiv:2410.11817",
"arxiv:2410.09401",
"arxiv:2409.12883",
"arxiv:2410.11888",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"conversational"
] | fill-mask | 2024-10-10T16:27:47Z | ---
license: mit
# language:
# - zh
# metrics:
# - accuracy
# base_model:
# - deepseek-ai/DeepSeek-V3
# - deepseek-ai/DeepSeek-V3-Base
# base_model_relation: merge
library_name: transformers
# pipeline_tag: image-text-to-text
# widget:
# - src: >-
# https://huggingface.co/iSolver-AI/FEnet/resolve/main/xiaohongshu-girls-enndme-1.jpg
# example_title: enndme-pic-1
# output:
# text: Hello my name is Julien
# - src: >-
# https://huggingface.co/iSolver-AI/FEnet/resolve/main/xiaohongshu-girls-enndme-2.jpg
# example_title: enndme-pic-2
# output:
# - label: POSITIVE
# score: 0.8
# - src: >-
# https://huggingface.co/iSolver-AI/FEnet/resolve/main/xiaohongshu-girls-enndme-3.jpg
# example_title: enndme-pic-3
# output:
# - label: POSITIVE
# score: 0.8
# tags:
# - mlx
# - llama
# - llama3
# - transformers
# - Reward Model
# - conversational
---
test webhook
# Paper:
- ✅ From HF+arxiv, paper entered with its full HF link: [F5-TTS: A Fairytaler that Fakes Fluent and Faithful Speech with Flow Matching](https://huggingface.co/papers/2410.06885)
- ✅ From HF+arxiv, paper entered with its full arxiv link: https://arxiv.org/abs/2410.11817
- From HF+arxiv, paper entered with title + ID only: [Lotus: Diffusion-based Visual Foundation Model for High-quality Dense Prediction](2409.18124)
- From HF+arxiv, paper entered with title only: Exploring Model Kinship for Merging Large Language Models
- From HF+arxiv, paper entered with ID only: 2410.12381
- ✅ From HF+arxiv, link entered without the https:// prefix: arxiv.org/abs/2410.09401
- ✅ From arxiv only, paper entered with its full arxiv link: [Improving Prototypical Parts Abstraction for Case-Based Reasoning Explanations Designed for the Kidney Stone Type Recognition](https://arxiv.org/abs/2409.12883); because it is cited in the README, the paper is automatically imported into Daily Papers and thus appears on both arxiv and HF
- ✅ From arxiv only, paper entered with its full arxiv link: [Aharonov-Bohm effects on the GUP framework](https://arxiv.org/abs/2410.11888); it is automatically imported into Daily Papers because it is cited in the README
- From arxiv only, paper entered with ID only: 2409.00821
- From arxiv only, paper entered with title only: An Augmentation-based Model Re-adaptation Framework for Robust Image Segmentation
- Non-arxiv paper:
```bibtex
@inproceedings{DBLP:conf/nips/XuLCLQ21,
  author    = {Yong Xu and Feng Li and Zhile Chen and Jinxiu Liang and Yuhui Quan},
  title     = {Encoding Spatial Distribution of Convolutional Features for Texture Representation},
  year      = {2021},
  cdate     = {1609459200000},
  pages     = {22732-22744},
  url       = {https://proceedings.neurips.cc/paper/2021/hash/c04c19c2c2474dbf5f7ac4372c5b9af1-Abstract.html},
  booktitle = {NeurIPS},
  crossref  = {conf/nips/2021}
}
```
> Dataset title record: allenai/WildBench
> Model title record: ==black-forest-labs/FLUX.1-dev==
> Dataset title record: LLM360/TxT360 |
mradermacher/qwen2.5-14B-PT-BR-Instruct-GGUF | mradermacher | 2025-05-22T03:26:50Z | 44 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"pt",
"base_model:amadeusai/Amadeus-Verbo-BI-Qwen-2.5-14B-PT-BR-Instruct-Experimental",
"base_model:quantized:amadeusai/Amadeus-Verbo-BI-Qwen-2.5-14B-PT-BR-Instruct-Experimental",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-05T06:12:54Z | ---
base_model: amadeusai/Amadeus-Verbo-BI-Qwen-2.5-14B-PT-BR-Instruct-Experimental
language:
- pt
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-14B-Instruct/blob/main/LICENSE
quantized_by: mradermacher
tags:
- text-generation-inference
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/amadeusai/Amadeus-Verbo-BI-Qwen-2.5-14B-PT-BR-Instruct-Experimental
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
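For a concrete starting point (this sketch is not part of the original card), here is a minimal example assuming [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) is installed; the filename is the Q4_K_M quant from the table below:

```python
# Minimal sketch, assuming llama-cpp-python (pip install llama-cpp-python).
# The filename below is the Q4_K_M quant listed in the "Provided Quants" table.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/qwen2.5-14B-PT-BR-Instruct-GGUF",
    filename="qwen2.5-14B-PT-BR-Instruct.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Olá! Quem é você?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```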
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-14B-PT-BR-Instruct-GGUF/resolve/main/qwen2.5-14B-PT-BR-Instruct.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-14B-PT-BR-Instruct-GGUF/resolve/main/qwen2.5-14B-PT-BR-Instruct.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-14B-PT-BR-Instruct-GGUF/resolve/main/qwen2.5-14B-PT-BR-Instruct.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-14B-PT-BR-Instruct-GGUF/resolve/main/qwen2.5-14B-PT-BR-Instruct.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-14B-PT-BR-Instruct-GGUF/resolve/main/qwen2.5-14B-PT-BR-Instruct.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-14B-PT-BR-Instruct-GGUF/resolve/main/qwen2.5-14B-PT-BR-Instruct.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-14B-PT-BR-Instruct-GGUF/resolve/main/qwen2.5-14B-PT-BR-Instruct.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-14B-PT-BR-Instruct-GGUF/resolve/main/qwen2.5-14B-PT-BR-Instruct.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-14B-PT-BR-Instruct-GGUF/resolve/main/qwen2.5-14B-PT-BR-Instruct.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-14B-PT-BR-Instruct-GGUF/resolve/main/qwen2.5-14B-PT-BR-Instruct.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-14B-PT-BR-Instruct-GGUF/resolve/main/qwen2.5-14B-PT-BR-Instruct.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
xuan-luo/MTPQwen3-8B-T1234-Eagle-mlp4 | xuan-luo | 2025-05-22T03:26:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mtpqwen3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | text-generation | 2025-05-22T02:57:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
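Until the authors document usage, here is a hedged sketch; it assumes the repository's custom code (see the `custom_code` tag) registers the architecture with the standard Auto classes, which is why `trust_remote_code=True` is passed:

```python
# Hypothetical usage sketch -- not documented by the authors.
# trust_remote_code=True is assumed to be required, since the repo ships custom code.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "xuan-luo/MTPQwen3-8B-T1234-Eagle-mlp4"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, device_map="auto"
)

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```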
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
elliotthwang/Best_KimLan-OpenChat_SFT-tw | elliotthwang | 2025-05-22T03:25:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-22T03:16:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
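As a placeholder until the authors add one, a minimal sketch following the standard 🤗 Transformers chat pipeline (assumed, not confirmed by the card):

```python
# Minimal sketch, assuming the standard text-generation pipeline works
# for this Mistral-architecture chat model.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="elliotthwang/Best_KimLan-OpenChat_SFT-tw",
    device_map="auto",
)
output = generator(
    [{"role": "user", "content": "Hello! Please introduce yourself."}],
    max_new_tokens=128,
    return_full_text=False,
)[0]
print(output["generated_text"])
```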
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/praxis-bookwriter-llama3.1-8b-sft-i1-GGUF | mradermacher | 2025-05-22T03:25:32Z | 0 | 2 | transformers | [
"transformers",
"gguf",
"writing",
"en",
"dataset:SillyTilly/fiction-writer-596",
"base_model:maldv/praxis-bookwriter-llama3.1-8b-sft",
"base_model:quantized:maldv/praxis-bookwriter-llama3.1-8b-sft",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-22T00:01:54Z | ---
base_model: maldv/praxis-bookwriter-llama3.1-8b-sft
datasets:
- SillyTilly/fiction-writer-596
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- writing
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/maldv/praxis-bookwriter-llama3.1-8b-sft
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/praxis-bookwriter-llama3.1-8b-sft-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
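As an illustration (not from the original card), a minimal completion-style sketch with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), using the i1-Q4_K_M file from the table below:

```python
# Minimal sketch, assuming llama-cpp-python; the filename is the
# i1-Q4_K_M quant listed in the "Provided Quants" table.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/praxis-bookwriter-llama3.1-8b-sft-i1-GGUF",
    filename="praxis-bookwriter-llama3.1-8b-sft.i1-Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=8192)
out = llm("Chapter 1\n\nThe rain had not stopped for three days,", max_tokens=200)
print(out["choices"][0]["text"])
```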
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/praxis-bookwriter-llama3.1-8b-sft-i1-GGUF/resolve/main/praxis-bookwriter-llama3.1-8b-sft.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/praxis-bookwriter-llama3.1-8b-sft-i1-GGUF/resolve/main/praxis-bookwriter-llama3.1-8b-sft.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/praxis-bookwriter-llama3.1-8b-sft-i1-GGUF/resolve/main/praxis-bookwriter-llama3.1-8b-sft.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/praxis-bookwriter-llama3.1-8b-sft-i1-GGUF/resolve/main/praxis-bookwriter-llama3.1-8b-sft.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/praxis-bookwriter-llama3.1-8b-sft-i1-GGUF/resolve/main/praxis-bookwriter-llama3.1-8b-sft.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/praxis-bookwriter-llama3.1-8b-sft-i1-GGUF/resolve/main/praxis-bookwriter-llama3.1-8b-sft.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/praxis-bookwriter-llama3.1-8b-sft-i1-GGUF/resolve/main/praxis-bookwriter-llama3.1-8b-sft.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/praxis-bookwriter-llama3.1-8b-sft-i1-GGUF/resolve/main/praxis-bookwriter-llama3.1-8b-sft.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/praxis-bookwriter-llama3.1-8b-sft-i1-GGUF/resolve/main/praxis-bookwriter-llama3.1-8b-sft.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/praxis-bookwriter-llama3.1-8b-sft-i1-GGUF/resolve/main/praxis-bookwriter-llama3.1-8b-sft.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/praxis-bookwriter-llama3.1-8b-sft-i1-GGUF/resolve/main/praxis-bookwriter-llama3.1-8b-sft.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/praxis-bookwriter-llama3.1-8b-sft-i1-GGUF/resolve/main/praxis-bookwriter-llama3.1-8b-sft.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/praxis-bookwriter-llama3.1-8b-sft-i1-GGUF/resolve/main/praxis-bookwriter-llama3.1-8b-sft.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/praxis-bookwriter-llama3.1-8b-sft-i1-GGUF/resolve/main/praxis-bookwriter-llama3.1-8b-sft.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/praxis-bookwriter-llama3.1-8b-sft-i1-GGUF/resolve/main/praxis-bookwriter-llama3.1-8b-sft.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/praxis-bookwriter-llama3.1-8b-sft-i1-GGUF/resolve/main/praxis-bookwriter-llama3.1-8b-sft.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/praxis-bookwriter-llama3.1-8b-sft-i1-GGUF/resolve/main/praxis-bookwriter-llama3.1-8b-sft.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/praxis-bookwriter-llama3.1-8b-sft-i1-GGUF/resolve/main/praxis-bookwriter-llama3.1-8b-sft.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/praxis-bookwriter-llama3.1-8b-sft-i1-GGUF/resolve/main/praxis-bookwriter-llama3.1-8b-sft.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/praxis-bookwriter-llama3.1-8b-sft-i1-GGUF/resolve/main/praxis-bookwriter-llama3.1-8b-sft.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/praxis-bookwriter-llama3.1-8b-sft-i1-GGUF/resolve/main/praxis-bookwriter-llama3.1-8b-sft.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/praxis-bookwriter-llama3.1-8b-sft-i1-GGUF/resolve/main/praxis-bookwriter-llama3.1-8b-sft.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/praxis-bookwriter-llama3.1-8b-sft-i1-GGUF/resolve/main/praxis-bookwriter-llama3.1-8b-sft.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/praxis-bookwriter-llama3.1-8b-sft-i1-GGUF/resolve/main/praxis-bookwriter-llama3.1-8b-sft.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
dong-99/Chronos-Platinum-72B-mlx-4Bit | dong-99 | 2025-05-22T03:25:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"roleplay",
"storywriting",
"qwen2.5",
"finetune",
"pytorch",
"mlx",
"mlx-my-repo",
"conversational",
"base_model:ZeusLabs/Chronos-Platinum-72B",
"base_model:quantized:ZeusLabs/Chronos-Platinum-72B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] | text-generation | 2025-05-22T03:23:01Z | ---
base_model: ZeusLabs/Chronos-Platinum-72B
tags:
- roleplay
- storywriting
- qwen2.5
- finetune
- transformers
- pytorch
- mlx
- mlx-my-repo
---
# dong-99/Chronos-Platinum-72B-mlx-4Bit
The Model [dong-99/Chronos-Platinum-72B-mlx-4Bit](https://huggingface.co/dong-99/Chronos-Platinum-72B-mlx-4Bit) was converted to MLX format from [ZeusLabs/Chronos-Platinum-72B](https://huggingface.co/ZeusLabs/Chronos-Platinum-72B) using mlx-lm version **0.22.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("dong-99/Chronos-Platinum-72B-mlx-4Bit")
prompt = "hello"

if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
ZZains/test | ZZains | 2025-05-22T03:23:40Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-22T03:23:40Z | ---
license: apache-2.0
---
|
zhaoguangxiang/Qwen2.5-1.5B-Open-R1-GRPO | zhaoguangxiang | 2025-05-22T03:22:42Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:open-r1/OpenR1-Math-220k",
"arxiv:2402.03300",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-10T12:11:51Z | ---
datasets: open-r1/OpenR1-Math-220k
library_name: transformers
model_name: Qwen2.5-1.5B-Open-R1-GRPO
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen2.5-1.5B-Open-R1-GRPO
This model was fine-tuned on the [open-r1/OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="zhaoguangxiang/Qwen2.5-1.5B-Open-R1-GRPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zhaoguangxiang/huggingface/runs/5eu1rg6j)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.0.dev0
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
sebastianmr18/xlm-roberta-ner-qlora-bs8 | sebastianmr18 | 2025-05-22T03:21:29Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:adapter:FacebookAI/xlm-roberta-large",
"region:us"
] | null | 2025-05-22T02:07:19Z | ---
base_model: xlm-roberta-large
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
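Since the card leaves this blank, here is a hedged sketch for loading the adapter on its base model; the label count is a placeholder (the card does not document the NER tag set), so adjust `num_labels` to match training:

```python
# Hypothetical sketch: num_labels=9 is a placeholder, not documented by the card.
import torch
from peft import PeftModel
from transformers import AutoModelForTokenClassification, AutoTokenizer

base_id = "FacebookAI/xlm-roberta-large"
adapter_id = "sebastianmr18/xlm-roberta-ner-qlora-bs8"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForTokenClassification.from_pretrained(base_id, num_labels=9)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Angela Merkel visited Paris.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1))
```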
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
DanielNRU/pollen-ner2-450 | DanielNRU | 2025-05-22T03:21:16Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:DeepPavlov/bert-base-bg-cs-pl-ru-cased",
"base_model:adapter:DeepPavlov/bert-base-bg-cs-pl-ru-cased",
"region:us"
] | null | 2025-05-22T03:15:53Z | ---
library_name: peft
base_model: DeepPavlov/bert-base-bg-cs-pl-ru-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: pollen-ner2-450
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pollen-ner2-450
This model is a fine-tuned version of [DeepPavlov/bert-base-bg-cs-pl-ru-cased](https://huggingface.co/DeepPavlov/bert-base-bg-cs-pl-ru-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4605
- Precision: 0.4883
- Recall: 0.6305
- F1: 0.5504
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| No log | 1.0 | 57 | 0.5754 | 0.4231 | 0.5361 | 0.4730 |
| No log | 2.0 | 114 | 0.5404 | 0.4237 | 0.5462 | 0.4772 |
| No log | 3.0 | 171 | 0.5230 | 0.4407 | 0.5743 | 0.4987 |
| No log | 4.0 | 228 | 0.5053 | 0.4470 | 0.5843 | 0.5065 |
| No log | 5.0 | 285 | 0.4844 | 0.4619 | 0.5843 | 0.5160 |
| No log | 6.0 | 342 | 0.4810 | 0.4708 | 0.6145 | 0.5331 |
| No log | 7.0 | 399 | 0.4710 | 0.4784 | 0.6225 | 0.5410 |
| No log | 8.0 | 456 | 0.4631 | 0.4822 | 0.6245 | 0.5442 |
| 0.9019 | 9.0 | 513 | 0.4615 | 0.4852 | 0.6265 | 0.5469 |
| 0.9019 | 10.0 | 570 | 0.4605 | 0.4883 | 0.6305 | 0.5504 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.7.0+cu128
- Datasets 3.5.0
- Tokenizers 0.21.1 |
Cloudmaster/Llama-3.2-3B-4bit-group128-exllamav2 | Cloudmaster | 2025-05-22T03:21:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2025-05-22T03:13:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
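As a starting point (assumed, not confirmed by the card), the GPTQ checkpoint should load through the standard 🤗 Transformers integration, which requires a GPTQ backend such as `gptqmodel` or `auto-gptq` plus `optimum`:

```python
# Minimal sketch, assuming the standard Transformers GPTQ integration
# (pip install optimum gptqmodel) -- not confirmed by the card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Cloudmaster/Llama-3.2-3B-4bit-group128-exllamav2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain group-size-128 quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```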
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
thecode12company/edwardzabalacode-model01 | thecode12company | 2025-05-22T03:19:32Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-22T02:53:26Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Edwardzabalacode Model01
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "TOK",
    "lora_weights": "https://huggingface.co/thecode12company/edwardzabalacode-model01/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('thecode12company/edwardzabalacode-model01', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/thecode12company/edwardzabalacode-model01/discussions) to add images that show off what you’ve made with this LoRA.
|
reachomk/gen2seg-sd | reachomk | 2025-05-22T03:18:31Z | 34 | 1 | diffusers | [
"diffusers",
"safetensors",
"arxiv:2505.15263",
"base_model:stabilityai/stable-diffusion-2",
"base_model:finetune:stabilityai/stable-diffusion-2",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2025-05-17T11:14:39Z | ---
base_model:
- stabilityai/stable-diffusion-2
---
# gen2seg: Generative Models Enable Generalizable Instance Segmentation
<img src='teaser.png'/>
This is the official model release for the Stable Diffusion 2 (SD) variant of our `gen2seg` generative instance segmenter. It is the same checkpoint we used to generate figures in the paper.
Paper: https://arxiv.org/abs/2505.15263
Please see our website https://reachomk.github.io/gen2seg for demos and additional qualitative samples.
If you are looking for our MAE-H variant, you can find that at https://huggingface.co/reachomk/gen2seg-mae-h
You can run this model at our GitHub: https://github.com/UCDVision/gen2seg or our Huggingface Space: https://huggingface.co/spaces/reachomk/gen2seg |
pasithbas159/Qwen2.5_HII_satellite_v1 | pasithbas159 | 2025-05-22T03:18:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2_5_vl",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-23T12:29:07Z | ---
base_model: unsloth/qwen2.5-vl-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** pasithbas159
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-vl-7b-instruct-unsloth-bnb-4bit
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
DanielNRU/pollen-ner2-400 | DanielNRU | 2025-05-22T03:15:38Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:DeepPavlov/bert-base-bg-cs-pl-ru-cased",
"base_model:adapter:DeepPavlov/bert-base-bg-cs-pl-ru-cased",
"region:us"
] | null | 2025-05-22T03:10:47Z | ---
library_name: peft
base_model: DeepPavlov/bert-base-bg-cs-pl-ru-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: pollen-ner2-400
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pollen-ner2-400
This model is a fine-tuned version of [DeepPavlov/bert-base-bg-cs-pl-ru-cased](https://huggingface.co/DeepPavlov/bert-base-bg-cs-pl-ru-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6034
- Precision: 0.4117
- Recall: 0.4960
- F1: 0.4499
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| No log | 1.0 | 50 | 0.8021 | 0.3428 | 0.1948 | 0.2484 |
| No log | 2.0 | 100 | 0.7462 | 0.3190 | 0.2390 | 0.2732 |
| No log | 3.0 | 150 | 0.7122 | 0.3388 | 0.3293 | 0.3340 |
| No log | 4.0 | 200 | 0.6697 | 0.3721 | 0.3594 | 0.3657 |
| No log | 5.0 | 250 | 0.6542 | 0.3978 | 0.4297 | 0.4131 |
| No log | 6.0 | 300 | 0.6287 | 0.4071 | 0.4357 | 0.4210 |
| No log | 7.0 | 350 | 0.6155 | 0.4011 | 0.4518 | 0.4249 |
| No log | 8.0 | 400 | 0.6096 | 0.4068 | 0.4779 | 0.4395 |
| No log | 9.0 | 450 | 0.6042 | 0.4132 | 0.4920 | 0.4491 |
| 1.1014 | 10.0 | 500 | 0.6034 | 0.4117 | 0.4960 | 0.4499 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.7.0+cu128
- Datasets 3.5.0
- Tokenizers 0.21.1 |
YUGOROU/Step2Modelv0.2 | YUGOROU | 2025-05-22T03:15:25Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-22T03:13:41Z | ---
base_model: unsloth/qwen3-1.7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** YUGOROU
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-1.7b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
GalaDev/Gala | GalaDev | 2025-05-22T03:14:58Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-22T03:14:58Z | ---
license: apache-2.0
---
|
alexlop/detr-t5-finetuned | alexlop | 2025-05-22T03:14:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-21T15:08:04Z | ---
library_name: transformers
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
model-index:
- name: detr-t5-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-t5-finetuned
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
OpenGVLab/InternVideo2_CLIP_S | OpenGVLab | 2025-05-22T03:12:46Z | 0 | 0 | null | [
"safetensors",
"internvideo2",
"custom_code",
"license:apache-2.0",
"region:us"
] | null | 2025-05-22T01:06:53Z | ---
license: apache-2.0
---
|
DanielNRU/pollen-ner2-350 | DanielNRU | 2025-05-22T03:10:35Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:DeepPavlov/bert-base-bg-cs-pl-ru-cased",
"base_model:adapter:DeepPavlov/bert-base-bg-cs-pl-ru-cased",
"region:us"
] | null | 2025-05-22T03:06:13Z | ---
library_name: peft
base_model: DeepPavlov/bert-base-bg-cs-pl-ru-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: pollen-ner2-350
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pollen-ner2-350
This model is a fine-tuned version of [DeepPavlov/bert-base-bg-cs-pl-ru-cased](https://huggingface.co/DeepPavlov/bert-base-bg-cs-pl-ru-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8648
- Precision: 0.4745
- Recall: 0.1305
- F1: 0.2047
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| No log | 1.0 | 44 | 1.1176 | 0.0 | 0.0 | 0.0 |
| No log | 2.0 | 88 | 1.0785 | 0.2143 | 0.0060 | 0.0117 |
| No log | 3.0 | 132 | 1.0249 | 0.3478 | 0.0161 | 0.0307 |
| No log | 4.0 | 176 | 0.9895 | 0.4524 | 0.0382 | 0.0704 |
| No log | 5.0 | 220 | 0.9502 | 0.5088 | 0.0582 | 0.1045 |
| No log | 6.0 | 264 | 0.9204 | 0.4559 | 0.0622 | 0.1095 |
| No log | 7.0 | 308 | 0.8944 | 0.4819 | 0.0803 | 0.1377 |
| No log | 8.0 | 352 | 0.8794 | 0.4685 | 0.1044 | 0.1708 |
| No log | 9.0 | 396 | 0.8661 | 0.472 | 0.1185 | 0.1894 |
| No log | 10.0 | 440 | 0.8648 | 0.4745 | 0.1305 | 0.2047 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.7.0+cu128
- Datasets 3.5.0
- Tokenizers 0.21.1 |
0hFywu7sWF24/xcvbvxcb | 0hFywu7sWF24 | 2025-05-22T03:10:15Z | 0 | 0 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2025-05-22T03:10:15Z | ---
license: bigscience-bloom-rail-1.0
---
|
kwstisskeyi/xcvzxcv | kwstisskeyi | 2025-05-22T03:10:01Z | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-05-22T03:10:01Z | ---
license: bigcode-openrail-m
---
|
samuelchristlie/Wan2.1-T2V-1.3B-GGUF | samuelchristlie | 2025-05-22T03:07:42Z | 48 | 0 | diffusers | [
"diffusers",
"gguf",
"video",
"video-generation",
"text-to-video",
"en",
"base_model:Wan-AI/Wan2.1-T2V-1.3B",
"base_model:quantized:Wan-AI/Wan2.1-T2V-1.3B",
"license:apache-2.0",
"region:us"
] | text-to-video | 2025-05-19T05:33:19Z | ---
license: apache-2.0
language:
- en
pipeline_tag: text-to-video
library_name: diffusers
tags:
- video
- video-generation
base_model:
- Wan-AI/Wan2.1-T2V-1.3B
---
```
________ ______ ____ _______ ______ ___ ___ ____ ______ ______ _______ _______ _______ _______
| | | |.---.-.-----.|__ | |_ | _____|_ _|__ | | |_____|_ | |__ | __ \______| __| __| | | ___|
| | | || _ | || __|__ _| ||______|| | | __| | |______|| |_ __|__ | __ <______| | | | | | | ___|
|________||___._|__|__||______|__|______| |___| |______|\_____/ |______|__|______|______/ |_______|_______|_______|___|
```
# Wan-2.1-T2V-1.3B-GGUF
## Direct GGUF Conversion of Wan2.1-T2V-1.3B
Wan2.1 is an open-source suite of video foundation models, compatible with consumer-grade GPUs, that excels in various video generation tasks like text-to-video, image-to-video, and video editing, even supporting visual text generation.
## Table of Contents 📝
1. ▶ [Usage](#usage)
2. 📃 [License](#license)
3. 🙏 [Acknowledgements](#acknowledgements)
<a name="usage"/>
## ▶ Usage
Download models using `huggingface-cli`:
```
pip install "huggingface_hub[cli]"
huggingface-cli download samuelchristlie/Wan2.1-T2V-1.3B-GGUF --local-dir ./Wan2.1-T2V-1.3B-GGUF
```
You can also download directly from [this page](https://huggingface.co/samuelchristlie/Wan2.1-T2V-1.3B-GGUF/tree/main).
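The same download from Python, via `huggingface_hub` (a convenience sketch, equivalent to the CLI command above):

```python
# Download the whole repo from Python instead of the CLI.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="samuelchristlie/Wan2.1-T2V-1.3B-GGUF",
    local_dir="./Wan2.1-T2V-1.3B-GGUF",
)
```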
<a name="license"/>
## 📃 License
This model is a derivative work of the original model licensed under the Apache 2.0 License, and is therefore distributed under the terms of the same license.
<a name="acknowledgements"/>
## 🙏 Acknowledgements
Thanks to Patrick Gillespie for creating the ASCII text art tool used in this project
https://patorjk.com/software/taag/
Wan-AI for the Wan model
https://huggingface.co/Wan-AI/Wan2.1-T2V-1.3B
https://huggingface.co/city96
|
allura-quants/allura-org_Q3-30b-A3b-Pentiment_EXL3_6.0bpw_H6 | allura-quants | 2025-05-22T03:06:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_moe",
"text-generation",
"mergekit",
"merge",
"exl3",
"conversational",
"base_model:allura-org/Q3-30b-A3b-Pentiment",
"base_model:quantized:allura-org/Q3-30b-A3b-Pentiment",
"autotrain_compatible",
"endpoints_compatible",
"6-bit",
"region:us"
] | text-generation | 2025-05-22T02:59:07Z | ---
base_model: allura-org/Q3-30b-A3b-Pentiment
base_model_relation: quantized
quantized_by: ArtusDev
library_name: transformers
tags:
- mergekit
- merge
- exl3
---
# Pentiment

|
allura-quants/allura-org_Q3-30b-A3b-Pentiment_EXL3_5.0bpw_H6 | allura-quants | 2025-05-22T03:06:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_moe",
"text-generation",
"mergekit",
"merge",
"exl3",
"conversational",
"base_model:allura-org/Q3-30b-A3b-Pentiment",
"base_model:quantized:allura-org/Q3-30b-A3b-Pentiment",
"autotrain_compatible",
"endpoints_compatible",
"5-bit",
"region:us"
] | text-generation | 2025-05-22T02:56:46Z | ---
base_model: allura-org/Q3-30b-A3b-Pentiment
base_model_relation: quantized
quantized_by: ArtusDev
library_name: transformers
tags:
- mergekit
- merge
- exl3
---
# Pentiment

|
allura-quants/allura-org_Q3-30b-A3b-Pentiment_EXL3_4.5bpw_H6 | allura-quants | 2025-05-22T03:06:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_moe",
"text-generation",
"mergekit",
"merge",
"exl3",
"conversational",
"base_model:allura-org/Q3-30b-A3b-Pentiment",
"base_model:quantized:allura-org/Q3-30b-A3b-Pentiment",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-22T02:54:32Z | ---
base_model: allura-org/Q3-30b-A3b-Pentiment
base_model_relation: quantized
quantized_by: ArtusDev
library_name: transformers
tags:
- mergekit
- merge
- exl3
---
# Pentiment

|
allura-quants/allura-org_Q3-30b-A3b-Pentiment_EXL3_4.0bpw_H6 | allura-quants | 2025-05-22T03:06:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_moe",
"text-generation",
"mergekit",
"merge",
"exl3",
"conversational",
"base_model:allura-org/Q3-30b-A3b-Pentiment",
"base_model:quantized:allura-org/Q3-30b-A3b-Pentiment",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"region:us"
] | text-generation | 2025-05-22T02:52:25Z | ---
base_model: allura-org/Q3-30b-A3b-Pentiment
base_model_relation: quantized
quantized_by: ArtusDev
library_name: transformers
tags:
- mergekit
- merge
- exl3
---
# Pentiment

|
DanielNRU/pollen-ner2-300 | DanielNRU | 2025-05-22T03:06:01Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:DeepPavlov/bert-base-bg-cs-pl-ru-cased",
"base_model:adapter:DeepPavlov/bert-base-bg-cs-pl-ru-cased",
"region:us"
] | null | 2025-05-22T03:04:43Z | ---
library_name: peft
base_model: DeepPavlov/bert-base-bg-cs-pl-ru-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: pollen-ner2-300
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pollen-ner2-300
This model is a fine-tuned version of [DeepPavlov/bert-base-bg-cs-pl-ru-cased](https://huggingface.co/DeepPavlov/bert-base-bg-cs-pl-ru-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1385
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|
| No log | 1.0 | 38 | 1.1385 | 0.0 | 0.0 | 0.0 |
| No log | 2.0 | 76 | 1.0968 | 0.0 | 0.0 | 0.0 |
| No log | 3.0 | 114 | 1.0804 | 0.0 | 0.0 | 0.0 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.7.0+cu128
- Datasets 3.5.0
- Tokenizers 0.21.1 |
allura-quants/allura-org_Q3-30b-A3b-Pentiment_EXL3_3.5bpw_H6 | allura-quants | 2025-05-22T03:05:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_moe",
"text-generation",
"mergekit",
"merge",
"exl3",
"conversational",
"base_model:allura-org/Q3-30b-A3b-Pentiment",
"base_model:quantized:allura-org/Q3-30b-A3b-Pentiment",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-22T02:50:09Z | ---
base_model: allura-org/Q3-30b-A3b-Pentiment
base_model_relation: quantized
quantized_by: ArtusDev
library_name: transformers
tags:
- mergekit
- merge
- exl3
---
# Pentiment

|
Omar401/llam3_esi | Omar401 | 2025-05-22T03:05:16Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-22T01:16:25Z | ---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- generated_from_trainer
model-index:
- name: workspace/data/outputs/llama3_esi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.8.0.dev0`
```yaml
# Adapter & Model
adapter: lora
base_model: meta-llama/Meta-Llama-3-8B-Instruct
bf16: auto
load_in_8bit: true
special_tokens:
pad_token: "<PAD>"
# Dataset
dataset_processes: 32
datasets:
- path: /workspace/data/alpaca_esi_dataset.jsonl
type: alpaca
trust_remote_code: false
message_property_mappings:
instruction: instruction
input: input
output: output
# Output
output_dir: /workspace/data/outputs/llama3_esi
# Training Parameters
sequence_len: 1024
micro_batch_size: 64
gradient_accumulation_steps: 1
gradient_checkpointing: true
num_epochs: 3
learning_rate: 0.0002
optimizer: adamw_bnb_8bit
# LoRA Settings
lora_r: 8
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
- q_proj
- k_proj
- v_proj
- o_proj
- gate_proj
- down_proj
- up_proj
# Trainer Settings
train_on_inputs: false
save_strategy: epoch
save_total_limit: 1
save_safetensors: true
logging_steps: 10
tokenizer_pad_to_eos_token: true
# Misc
shuffle_merged_datasets: true
skip_prepare_dataset: false
strict: false
ray_num_workers: 1
resources_per_worker:
GPU: 1
use_ray: false
val_set_size: 0.0
weight_decay: 0.0
# TRL settings for compatibility
trl:
log_completions: false
ref_model_mixup_alpha: 0.9
ref_model_sync_steps: 64
sync_ref_model: false
use_vllm: false
vllm_device: auto
vllm_dtype: auto
vllm_gpu_memory_utilization: 0.9
```
</details><br>
# workspace/data/outputs/llama3_esi
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the /workspace/data/alpaca_esi_dataset.jsonl dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 3.0
### Training results
### Framework versions
- PEFT 0.14.0
- Transformers 4.49.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
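
To use the trained adapter, it can be attached back onto the base model with PEFT. A minimal sketch (it assumes the adapter weights live in this repo and that you have approved access to the gated base model):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct"  # gated repo; requires access approval
)
model = PeftModel.from_pretrained(base, "Omar401/llam3_esi")  # adapter repo id assumed from this card
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
```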
|
allura-quants/allura-org_Q3-30b-A3b-Pentiment_EXL3_3.0bpw_H6 | allura-quants | 2025-05-22T03:04:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_moe",
"text-generation",
"mergekit",
"merge",
"exl3",
"conversational",
"base_model:allura-org/Q3-30b-A3b-Pentiment",
"base_model:quantized:allura-org/Q3-30b-A3b-Pentiment",
"autotrain_compatible",
"endpoints_compatible",
"3-bit",
"region:us"
] | text-generation | 2025-05-22T02:48:33Z | ---
base_model: allura-org/Q3-30b-A3b-Pentiment
base_model_relation: quantized
quantized_by: ArtusDev
library_name: transformers
tags:
- mergekit
- merge
- exl3
---
# Pentiment

|
DanielNRU/pollen-ner2-250 | DanielNRU | 2025-05-22T03:04:32Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:DeepPavlov/bert-base-bg-cs-pl-ru-cased",
"base_model:adapter:DeepPavlov/bert-base-bg-cs-pl-ru-cased",
"region:us"
] | null | 2025-05-22T03:03:24Z | ---
library_name: peft
base_model: DeepPavlov/bert-base-bg-cs-pl-ru-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: pollen-ner2-250
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pollen-ner2-250
This model is a fine-tuned version of [DeepPavlov/bert-base-bg-cs-pl-ru-cased](https://huggingface.co/DeepPavlov/bert-base-bg-cs-pl-ru-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1477
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|
| No log | 1.0 | 32 | 1.1477 | 0.0 | 0.0 | 0.0 |
| No log | 2.0 | 64 | 1.1336 | 0.0 | 0.0 | 0.0 |
| No log | 3.0 | 96 | 1.1126 | 0.0 | 0.0 | 0.0 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.7.0+cu128
- Datasets 3.5.0
- Tokenizers 0.21.1 |
DanielNRU/pollen-ner2-150 | DanielNRU | 2025-05-22T03:02:00Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:DeepPavlov/bert-base-bg-cs-pl-ru-cased",
"base_model:adapter:DeepPavlov/bert-base-bg-cs-pl-ru-cased",
"region:us"
] | null | 2025-05-22T03:01:08Z | ---
library_name: peft
base_model: DeepPavlov/bert-base-bg-cs-pl-ru-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: pollen-ner2-150
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pollen-ner2-150
This model is a fine-tuned version of [DeepPavlov/bert-base-bg-cs-pl-ru-cased](https://huggingface.co/DeepPavlov/bert-base-bg-cs-pl-ru-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6448
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|
| No log | 1.0 | 19 | 1.6448 | 0.0 | 0.0 | 0.0 |
| No log | 2.0 | 38 | 1.2916 | 0.0 | 0.0 | 0.0 |
| No log | 3.0 | 57 | 1.1667 | 0.0 | 0.0 | 0.0 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.7.0+cu128
- Datasets 3.5.0
- Tokenizers 0.21.1 |
DanielNRU/pollen-ner2-100 | DanielNRU | 2025-05-22T03:00:53Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:DeepPavlov/bert-base-bg-cs-pl-ru-cased",
"base_model:adapter:DeepPavlov/bert-base-bg-cs-pl-ru-cased",
"region:us"
] | null | 2025-05-22T03:00:06Z | ---
library_name: peft
base_model: DeepPavlov/bert-base-bg-cs-pl-ru-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: pollen-ner2-100
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pollen-ner2-100
This model is a fine-tuned version of [DeepPavlov/bert-base-bg-cs-pl-ru-cased](https://huggingface.co/DeepPavlov/bert-base-bg-cs-pl-ru-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0243
- Precision: 0.0057
- Recall: 0.0141
- F1: 0.0081
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| No log | 1.0 | 13 | 2.0243 | 0.0057 | 0.0141 | 0.0081 |
| No log | 2.0 | 26 | 1.7798 | 0.0034 | 0.0020 | 0.0025 |
| No log | 3.0 | 39 | 1.5304 | 0.0 | 0.0 | 0.0 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.7.0+cu128
- Datasets 3.5.0
- Tokenizers 0.21.1 |
JunseongLEEE/llama-3.2-1b-sft-dpo | JunseongLEEE | 2025-05-22T02:55:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-22T02:55:45Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/CharGen-v3-beta-263-s98-GGUF | mradermacher | 2025-05-22T02:53:48Z | 11 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:CharGen-Archive/CharGen-v3-beta-263-s98",
"base_model:quantized:CharGen-Archive/CharGen-v3-beta-263-s98",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-14T13:48:42Z | ---
base_model: CharGen-Archive/CharGen-v3-beta-263-s98
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/CharGen-Archive/CharGen-v3-beta-263-s98
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
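
As a concrete example, one of these files can be loaded with the `llama-cpp-python` bindings (a sketch — the file name assumes you downloaded the Q4_K_M quant):

```python
from llama_cpp import Llama

# Load a quant downloaded from this repo; adjust the path to your file.
llm = Llama(model_path="CharGen-v3-beta-263-s98.Q4_K_M.gguf", n_ctx=4096)
out = llm("Describe a stoic sky-pirate captain:", max_tokens=128)
print(out["choices"][0]["text"])
```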
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/CharGen-v3-beta-263-s98-GGUF/resolve/main/CharGen-v3-beta-263-s98.Q2_K.gguf) | Q2_K | 8.4 | |
| [GGUF](https://huggingface.co/mradermacher/CharGen-v3-beta-263-s98-GGUF/resolve/main/CharGen-v3-beta-263-s98.Q3_K_S.gguf) | Q3_K_S | 9.7 | |
| [GGUF](https://huggingface.co/mradermacher/CharGen-v3-beta-263-s98-GGUF/resolve/main/CharGen-v3-beta-263-s98.Q3_K_M.gguf) | Q3_K_M | 10.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/CharGen-v3-beta-263-s98-GGUF/resolve/main/CharGen-v3-beta-263-s98.Q3_K_L.gguf) | Q3_K_L | 11.8 | |
| [GGUF](https://huggingface.co/mradermacher/CharGen-v3-beta-263-s98-GGUF/resolve/main/CharGen-v3-beta-263-s98.IQ4_XS.gguf) | IQ4_XS | 12.1 | |
| [GGUF](https://huggingface.co/mradermacher/CharGen-v3-beta-263-s98-GGUF/resolve/main/CharGen-v3-beta-263-s98.Q4_K_S.gguf) | Q4_K_S | 12.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CharGen-v3-beta-263-s98-GGUF/resolve/main/CharGen-v3-beta-263-s98.Q4_K_M.gguf) | Q4_K_M | 13.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CharGen-v3-beta-263-s98-GGUF/resolve/main/CharGen-v3-beta-263-s98.Q5_K_S.gguf) | Q5_K_S | 15.4 | |
| [GGUF](https://huggingface.co/mradermacher/CharGen-v3-beta-263-s98-GGUF/resolve/main/CharGen-v3-beta-263-s98.Q5_K_M.gguf) | Q5_K_M | 15.8 | |
| [GGUF](https://huggingface.co/mradermacher/CharGen-v3-beta-263-s98-GGUF/resolve/main/CharGen-v3-beta-263-s98.Q6_K.gguf) | Q6_K | 18.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/CharGen-v3-beta-263-s98-GGUF/resolve/main/CharGen-v3-beta-263-s98.Q8_0.gguf) | Q8_0 | 23.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
yhessyradh/xcvzxcv | yhessyradh | 2025-05-22T02:53:23Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-05-22T02:53:22Z | ---
license: creativeml-openrail-m
---
|
darolraiko66/xcvzxcv | darolraiko66 | 2025-05-22T02:53:23Z | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-05-22T02:53:21Z | ---
license: bigcode-openrail-m
---
|
csukuangfj/spleeter-checkpoints | csukuangfj | 2025-05-22T02:52:49Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-22T02:47:37Z | # Introduction
Checkpoints are from
https://huggingface.co/csukuangfj/spleeter-torch |
vitasomegood337/vitasomegood337 | vitasomegood337 | 2025-05-22T02:52:28Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-22T02:52:26Z | ---
license: apache-2.0
---
|
shanchen/ds-limo-mer4ge-250 | shanchen | 2025-05-22T02:52:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"base_model:merge:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"base_model:shanchen/ds-limo-fr-250",
"base_model:merge:shanchen/ds-limo-fr-250",
"base_model:shanchen/ds-limo-ja-250",
"base_model:merge:shanchen/ds-limo-ja-250",
"base_model:shanchen/ds-limo-te-250",
"base_model:merge:shanchen/ds-limo-te-250",
"base_model:shanchen/ds-limo-th-250",
"base_model:merge:shanchen/ds-limo-th-250",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-22T02:46:35Z | ---
base_model:
- shanchen/ds-limo-te-250
- deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
- shanchen/ds-limo-th-250
- shanchen/ds-limo-ja-250
- shanchen/ds-limo-fr-250
library_name: transformers
tags:
- mergekit
- merge
---
# model1
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [deepseek-ai/DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) as a base.
### Models Merged
The following models were included in the merge:
* [shanchen/ds-limo-te-250](https://huggingface.co/shanchen/ds-limo-te-250)
* [shanchen/ds-limo-th-250](https://huggingface.co/shanchen/ds-limo-th-250)
* [shanchen/ds-limo-ja-250](https://huggingface.co/shanchen/ds-limo-ja-250)
* [shanchen/ds-limo-fr-250](https://huggingface.co/shanchen/ds-limo-fr-250)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: shanchen/ds-limo-fr-250
parameters:
density: 0.25
weight: 0.25
- model: shanchen/ds-limo-th-250
parameters:
density: 0.25
weight: 0.25
- model: shanchen/ds-limo-te-250
parameters:
density: 0.25
weight: 0.25
- model: shanchen/ds-limo-ja-250
parameters:
density: 0.25
weight: 0.25
merge_method: ties
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
parameters:
normalize: false
int8_mask: true
dtype: float16
```
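
The merged checkpoint loads like any other causal LM. A minimal usage sketch (the prompt and generation settings are illustrative assumptions, not from this card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("shanchen/ds-limo-mer4ge-250")
model = AutoModelForCausalLM.from_pretrained(
    "shanchen/ds-limo-mer4ge-250",
    torch_dtype=torch.float16,  # matches the merge's float16 dtype
    device_map="auto",
)
inputs = tokenizer("Solve step by step: 12 * 7 = ?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```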
|
mradermacher/TLDR-Gemma-7B-MA-PPO-Fixed5-GGUF | mradermacher | 2025-05-22T02:52:05Z | 36 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:openai/summarize_from_feedback",
"base_model:ernie-research/TLDR-Gemma-7B-MA-PPO-Fixed5",
"base_model:quantized:ernie-research/TLDR-Gemma-7B-MA-PPO-Fixed5",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-02-15T02:45:28Z | ---
base_model: ernie-research/TLDR-Gemma-7B-MA-PPO-Fixed5
datasets:
- openai/summarize_from_feedback
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ernie-research/TLDR-Gemma-7B-MA-PPO-Fixed5
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/TLDR-Gemma-7B-MA-PPO-Fixed5-GGUF/resolve/main/TLDR-Gemma-7B-MA-PPO-Fixed5.Q2_K.gguf) | Q2_K | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/TLDR-Gemma-7B-MA-PPO-Fixed5-GGUF/resolve/main/TLDR-Gemma-7B-MA-PPO-Fixed5.Q3_K_S.gguf) | Q3_K_S | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/TLDR-Gemma-7B-MA-PPO-Fixed5-GGUF/resolve/main/TLDR-Gemma-7B-MA-PPO-Fixed5.Q3_K_M.gguf) | Q3_K_M | 4.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/TLDR-Gemma-7B-MA-PPO-Fixed5-GGUF/resolve/main/TLDR-Gemma-7B-MA-PPO-Fixed5.Q3_K_L.gguf) | Q3_K_L | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/TLDR-Gemma-7B-MA-PPO-Fixed5-GGUF/resolve/main/TLDR-Gemma-7B-MA-PPO-Fixed5.IQ4_XS.gguf) | IQ4_XS | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/TLDR-Gemma-7B-MA-PPO-Fixed5-GGUF/resolve/main/TLDR-Gemma-7B-MA-PPO-Fixed5.Q4_K_S.gguf) | Q4_K_S | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TLDR-Gemma-7B-MA-PPO-Fixed5-GGUF/resolve/main/TLDR-Gemma-7B-MA-PPO-Fixed5.Q4_K_M.gguf) | Q4_K_M | 5.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TLDR-Gemma-7B-MA-PPO-Fixed5-GGUF/resolve/main/TLDR-Gemma-7B-MA-PPO-Fixed5.Q5_K_S.gguf) | Q5_K_S | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/TLDR-Gemma-7B-MA-PPO-Fixed5-GGUF/resolve/main/TLDR-Gemma-7B-MA-PPO-Fixed5.Q5_K_M.gguf) | Q5_K_M | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/TLDR-Gemma-7B-MA-PPO-Fixed5-GGUF/resolve/main/TLDR-Gemma-7B-MA-PPO-Fixed5.Q6_K.gguf) | Q6_K | 7.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/TLDR-Gemma-7B-MA-PPO-Fixed5-GGUF/resolve/main/TLDR-Gemma-7B-MA-PPO-Fixed5.Q8_0.gguf) | Q8_0 | 9.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/TLDR-Gemma-7B-MA-PPO-Fixed5-GGUF/resolve/main/TLDR-Gemma-7B-MA-PPO-Fixed5.f16.gguf) | f16 | 17.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/TLDR-Gemma-2B-MA-PPO-Fixed5-GGUF | mradermacher | 2025-05-22T02:51:38Z | 12 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:openai/summarize_from_feedback",
"base_model:ernie-research/TLDR-Gemma-2B-MA-PPO-Fixed5",
"base_model:quantized:ernie-research/TLDR-Gemma-2B-MA-PPO-Fixed5",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-02-15T02:53:27Z | ---
base_model: ernie-research/TLDR-Gemma-2B-MA-PPO-Fixed5
datasets:
- openai/summarize_from_feedback
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ernie-research/TLDR-Gemma-2B-MA-PPO-Fixed5
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/TLDR-Gemma-2B-MA-PPO-Fixed5-GGUF/resolve/main/TLDR-Gemma-2B-MA-PPO-Fixed5.Q2_K.gguf) | Q2_K | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/TLDR-Gemma-2B-MA-PPO-Fixed5-GGUF/resolve/main/TLDR-Gemma-2B-MA-PPO-Fixed5.Q3_K_S.gguf) | Q3_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/TLDR-Gemma-2B-MA-PPO-Fixed5-GGUF/resolve/main/TLDR-Gemma-2B-MA-PPO-Fixed5.Q3_K_M.gguf) | Q3_K_M | 1.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/TLDR-Gemma-2B-MA-PPO-Fixed5-GGUF/resolve/main/TLDR-Gemma-2B-MA-PPO-Fixed5.Q3_K_L.gguf) | Q3_K_L | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/TLDR-Gemma-2B-MA-PPO-Fixed5-GGUF/resolve/main/TLDR-Gemma-2B-MA-PPO-Fixed5.IQ4_XS.gguf) | IQ4_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/TLDR-Gemma-2B-MA-PPO-Fixed5-GGUF/resolve/main/TLDR-Gemma-2B-MA-PPO-Fixed5.Q4_K_S.gguf) | Q4_K_S | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TLDR-Gemma-2B-MA-PPO-Fixed5-GGUF/resolve/main/TLDR-Gemma-2B-MA-PPO-Fixed5.Q4_K_M.gguf) | Q4_K_M | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TLDR-Gemma-2B-MA-PPO-Fixed5-GGUF/resolve/main/TLDR-Gemma-2B-MA-PPO-Fixed5.Q5_K_S.gguf) | Q5_K_S | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/TLDR-Gemma-2B-MA-PPO-Fixed5-GGUF/resolve/main/TLDR-Gemma-2B-MA-PPO-Fixed5.Q5_K_M.gguf) | Q5_K_M | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/TLDR-Gemma-2B-MA-PPO-Fixed5-GGUF/resolve/main/TLDR-Gemma-2B-MA-PPO-Fixed5.Q6_K.gguf) | Q6_K | 2.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/TLDR-Gemma-2B-MA-PPO-Fixed5-GGUF/resolve/main/TLDR-Gemma-2B-MA-PPO-Fixed5.Q8_0.gguf) | Q8_0 | 2.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/TLDR-Gemma-2B-MA-PPO-Fixed5-GGUF/resolve/main/TLDR-Gemma-2B-MA-PPO-Fixed5.f16.gguf) | f16 | 5.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
sebastianmr18/xlm-roberta-ner-qlora-bs16 | sebastianmr18 | 2025-05-22T02:50:43Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:adapter:FacebookAI/xlm-roberta-large",
"region:us"
] | null | 2025-05-22T02:50:39Z | ---
base_model: xlm-roberta-large
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
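
In the meantime, a minimal loading sketch (the label count is a hypothetical placeholder — adjust `num_labels` to whatever tag set this adapter was trained on):

```python
from peft import PeftModel
from transformers import AutoModelForTokenClassification, AutoTokenizer

# Base model per this card's metadata; num_labels=9 is an assumption.
base = AutoModelForTokenClassification.from_pretrained(
    "FacebookAI/xlm-roberta-large", num_labels=9
)
model = PeftModel.from_pretrained(base, "sebastianmr18/xlm-roberta-ner-qlora-bs16")
tokenizer = AutoTokenizer.from_pretrained("FacebookAI/xlm-roberta-large")
```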
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
mradermacher/APPS-Gemma-7B-MA-PPO-Fixed10-i1-GGUF | mradermacher | 2025-05-22T02:49:51Z | 37 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:codeparrot/apps",
"base_model:ernie-research/APPS-Gemma-7B-MA-PPO-Fixed10",
"base_model:quantized:ernie-research/APPS-Gemma-7B-MA-PPO-Fixed10",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-02-15T15:18:31Z | ---
base_model: ernie-research/APPS-Gemma-7B-MA-PPO-Fixed10
datasets:
- codeparrot/apps
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ernie-research/APPS-Gemma-7B-MA-PPO-Fixed10
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/APPS-Gemma-7B-MA-PPO-Fixed10-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/APPS-Gemma-7B-MA-PPO-Fixed10-i1-GGUF/resolve/main/APPS-Gemma-7B-MA-PPO-Fixed10.i1-IQ1_S.gguf) | i1-IQ1_S | 2.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/APPS-Gemma-7B-MA-PPO-Fixed10-i1-GGUF/resolve/main/APPS-Gemma-7B-MA-PPO-Fixed10.i1-IQ1_M.gguf) | i1-IQ1_M | 2.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/APPS-Gemma-7B-MA-PPO-Fixed10-i1-GGUF/resolve/main/APPS-Gemma-7B-MA-PPO-Fixed10.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/APPS-Gemma-7B-MA-PPO-Fixed10-i1-GGUF/resolve/main/APPS-Gemma-7B-MA-PPO-Fixed10.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/APPS-Gemma-7B-MA-PPO-Fixed10-i1-GGUF/resolve/main/APPS-Gemma-7B-MA-PPO-Fixed10.i1-IQ2_S.gguf) | i1-IQ2_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/APPS-Gemma-7B-MA-PPO-Fixed10-i1-GGUF/resolve/main/APPS-Gemma-7B-MA-PPO-Fixed10.i1-IQ2_M.gguf) | i1-IQ2_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/APPS-Gemma-7B-MA-PPO-Fixed10-i1-GGUF/resolve/main/APPS-Gemma-7B-MA-PPO-Fixed10.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.3 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/APPS-Gemma-7B-MA-PPO-Fixed10-i1-GGUF/resolve/main/APPS-Gemma-7B-MA-PPO-Fixed10.i1-Q2_K.gguf) | i1-Q2_K | 3.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/APPS-Gemma-7B-MA-PPO-Fixed10-i1-GGUF/resolve/main/APPS-Gemma-7B-MA-PPO-Fixed10.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/APPS-Gemma-7B-MA-PPO-Fixed10-i1-GGUF/resolve/main/APPS-Gemma-7B-MA-PPO-Fixed10.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/APPS-Gemma-7B-MA-PPO-Fixed10-i1-GGUF/resolve/main/APPS-Gemma-7B-MA-PPO-Fixed10.i1-IQ3_S.gguf) | i1-IQ3_S | 4.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/APPS-Gemma-7B-MA-PPO-Fixed10-i1-GGUF/resolve/main/APPS-Gemma-7B-MA-PPO-Fixed10.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.1 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/APPS-Gemma-7B-MA-PPO-Fixed10-i1-GGUF/resolve/main/APPS-Gemma-7B-MA-PPO-Fixed10.i1-IQ3_M.gguf) | i1-IQ3_M | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/APPS-Gemma-7B-MA-PPO-Fixed10-i1-GGUF/resolve/main/APPS-Gemma-7B-MA-PPO-Fixed10.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/APPS-Gemma-7B-MA-PPO-Fixed10-i1-GGUF/resolve/main/APPS-Gemma-7B-MA-PPO-Fixed10.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/APPS-Gemma-7B-MA-PPO-Fixed10-i1-GGUF/resolve/main/APPS-Gemma-7B-MA-PPO-Fixed10.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/APPS-Gemma-7B-MA-PPO-Fixed10-i1-GGUF/resolve/main/APPS-Gemma-7B-MA-PPO-Fixed10.i1-IQ4_NL.gguf) | i1-IQ4_NL | 5.1 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/APPS-Gemma-7B-MA-PPO-Fixed10-i1-GGUF/resolve/main/APPS-Gemma-7B-MA-PPO-Fixed10.i1-Q4_0.gguf) | i1-Q4_0 | 5.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/APPS-Gemma-7B-MA-PPO-Fixed10-i1-GGUF/resolve/main/APPS-Gemma-7B-MA-PPO-Fixed10.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.1 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/APPS-Gemma-7B-MA-PPO-Fixed10-i1-GGUF/resolve/main/APPS-Gemma-7B-MA-PPO-Fixed10.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/APPS-Gemma-7B-MA-PPO-Fixed10-i1-GGUF/resolve/main/APPS-Gemma-7B-MA-PPO-Fixed10.i1-Q4_1.gguf) | i1-Q4_1 | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/APPS-Gemma-7B-MA-PPO-Fixed10-i1-GGUF/resolve/main/APPS-Gemma-7B-MA-PPO-Fixed10.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/APPS-Gemma-7B-MA-PPO-Fixed10-i1-GGUF/resolve/main/APPS-Gemma-7B-MA-PPO-Fixed10.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/APPS-Gemma-7B-MA-PPO-Fixed10-i1-GGUF/resolve/main/APPS-Gemma-7B-MA-PPO-Fixed10.i1-Q6_K.gguf) | i1-Q6_K | 7.1 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
DanielNRU/pollen_re2 | DanielNRU | 2025-05-22T02:48:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:DeepPavlov/rubert-base-cased",
"base_model:finetune:DeepPavlov/rubert-base-cased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-22T02:35:14Z | ---
library_name: transformers
base_model: DeepPavlov/rubert-base-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: pollen-re-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pollen-re-model
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5235
- F1: 0.8505
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 422 | 0.6813 | 0.3148 |
| 0.6396 | 2.0 | 844 | 0.6553 | 0.4260 |
| 0.6787 | 3.0 | 1266 | 0.5011 | 0.5496 |
| 0.4929 | 4.0 | 1688 | 0.5218 | 0.6561 |
| 0.3969 | 5.0 | 2110 | 0.5235 | 0.8505 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu128
- Datasets 3.5.0
- Tokenizers 0.21.1
|
jinx2321/byt5-tagged-1e4-paper | jinx2321 | 2025-05-22T02:48:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/byt5-small",
"base_model:finetune:google/byt5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-21T21:29:03Z | ---
library_name: transformers
license: apache-2.0
base_model: google/byt5-small
tags:
- generated_from_trainer
model-index:
- name: byt5-tagged-1e4-paper
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# byt5-tagged-1e4-paper
This model is a fine-tuned version of [google/byt5-small](https://huggingface.co/google/byt5-small) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.52.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
kittyjosh111/jill-stinrgray-merged-fp16 | kittyjosh111 | 2025-05-22T02:47:32Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-19T18:00:21Z | ---
base_model: llama3.2
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** kittyjosh111
- **License:** apache-2.0
- **Finetuned from model :** llama3.2
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
---
## jill
This is an LLM fine-tuned on the dialogue of Jill Stingray from the game [Va11-Hall-A](https://store.steampowered.com/app/447530/VA11_HallA_Cyberpunk_Bartender_Action/). It is based on llama3.2:3b (linked below). While it does work, this LLM will frequently think to itself (like Jill often does) or may even refuse to respond (Jill tends to do that sometimes).
Overall, is it a good model? Meh. With the right system prompt, it's actually kinda nice. But if it's not role-playing as Jill, I wouldn't say so.
But does it work? Yea. And for my first fine-tuning, honestly it's better than I expected.
I had many issues with Unsloth. Training itself went smoothly, but I had trouble downloading the base model (I had to download it manually and load it locally) and saving to GGUF (which I had to do manually with the llama.cpp CLI). Anyway, I modified the instructions from their free Google Colab notebooks, then ran them as a Jupyter notebook on my local NVIDIA T550 laptop GPU.
Would I still recommend Unsloth? Honestly, yes. It was the only library I used that actually worked out in the end. I bet running the notebooks on Google Colab would lead to fewer errors simply because it's more reproducible.
The stats for the training of this llm are below:
- Ran on Python 3.10, EndeavourOS (Linux)
- 2116.7746 seconds used for training.
- 35.28 minutes used for training.
- Peak reserved memory = 3.33 GB.
- Peak reserved memory for training = 0.0 GB.
- Peak reserved memory % of max memory = 91.685 %.
- Peak reserved memory for training % of max memory = 0.0 %.
- Torch Version: 2.7.0+cu128
- CUDA Available: True
- CUDA Device: NVIDIA T550 Laptop GPU
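
A quick inference sketch with transformers (this assumes the tokenizer ships a chat template; the system prompt is an illustrative guess, though as noted above a Jill-style prompt helps a lot):

```python
from transformers import pipeline

chat = pipeline("text-generation", model="kittyjosh111/jill-stinrgray-merged-fp16")
messages = [
    {"role": "system", "content": "You are Jill Stingray, a bartender at VA-11 HALL-A."},
    {"role": "user", "content": "Rough day. What do you recommend?"},
]
print(chat(messages, max_new_tokens=120)[0]["generated_text"])
```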
---
### Links
- Va11-Hall-A. [steam link](https://store.steampowered.com/app/447530/VA11_HallA_Cyberpunk_Bartender_Action/)
- Model: [https://huggingface.co/chuanli11/Llama-3.2-3B-Instruct-uncensored](https://huggingface.co/chuanli11/Llama-3.2-3B-Instruct-uncensored)
- Dataset: [https://github.com/NoPlagiarism/va11halla-dialogues](https://github.com/NoPlagiarism/va11halla-dialogues) (did some formatting to make it a ShareGPT format)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
csukuangfj/spleeter-torch | csukuangfj | 2025-05-22T02:46:51Z | 0 | 6 | null | [
"license:apache-2.0",
"region:us"
] | null | 2023-08-24T07:56:32Z | ---
license: apache-2.0
---
This repository contains the PyTorch checkpoints
of the TensorFlow models from [spleeter][spleeter].
[spleeter]: https://github.com/deezer/spleeter
|
risolmayo/3784648b-e514-440f-84fe-83880b10afec | risolmayo | 2025-05-22T02:45:51Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-22T02:45:41Z | |
LuyiCui/DeepSeek-R1-Distill-Qwen-1.5B-DPO-1 | LuyiCui | 2025-05-22T02:45:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"dpo",
"conversational",
"dataset:LuyiCui/numina-deepseek-r1-qwen-7b-efficient-1-preference",
"arxiv:2305.18290",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-15T18:01:46Z | |
FormlessAI/849ebe32-8a12-441a-9de1-0cfd666c03c7 | FormlessAI | 2025-05-22T02:45:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"base_model:unsloth/SmolLM-360M",
"base_model:finetune:unsloth/SmolLM-360M",
"endpoints_compatible",
"region:us"
] | null | 2025-05-22T00:33:35Z | (no model card — an HTTP 429 rate-limit page was captured instead) |
Shannonjunior/d3eb5639-2805-4578-915f-14c46adc97cd | Shannonjunior | 2025-05-22T02:44:42Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-22T02:44:11Z | (no model card — an HTTP 429 rate-limit page was captured instead) |
MinaMila/gemma2_2b_unlearned_gu_LoRa_Adult_ep1_22 | MinaMila | 2025-05-22T02:44:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-22T02:44:03Z | (no model card — an HTTP 429 rate-limit page was captured instead) |
RichardErkhov/Emilioi99_-_Llama3_8B_finetuned-gguf | RichardErkhov | 2025-05-22T02:43:00Z | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-22T00:37:30Z | (no model card — an HTTP 429 rate-limit page was captured instead) |
dabrown/2d935fe8-fd25-4db8-8e36-effa1f7adf4f | dabrown | 2025-05-22T02:42:59Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-22T02:30:47Z | (no model card — an HTTP 429 rate-limit page was captured instead) |