modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-06-22 06:27:16) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 492 classes) | tags (sequence, length 1–4.05k) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-06-22 06:26:41) | card (string, 11–1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
csukuangfj/vits-piper-de_DE-glados-high | csukuangfj | 2025-06-16T07:36:47Z | 0 | 0 | null | [
"onnx",
"region:us"
] | null | 2025-06-16T07:07:49Z | # Introduction
The models are from
https://huggingface.co/systemofapwne/piper-de-glados/tree/main/de/de_DE |
csukuangfj/vits-piper-de_DE-glados-low | csukuangfj | 2025-06-16T07:36:27Z | 0 | 0 | null | [
"onnx",
"region:us"
] | null | 2025-06-16T07:07:24Z | # Introduction
The models are from
https://huggingface.co/systemofapwne/piper-de-glados/tree/main/de/de_DE |
aieng-lab/roberta-large_review-aspect | aieng-lab | 2025-06-16T07:36:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"en",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T07:36:07Z | ---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- roberta-large
pipeline_tag: text-classification
---
# RoBERTa large for classifying API reviews
This model classifies API reviews in developer forums (e.g., Stack Overflow) as 'usability', 'others', 'onlysentiment', 'bug', 'performance', 'community', 'documentation', 'compatibility', 'legal', 'portability' or 'security'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [roberta-large](https://huggingface.co/roberta-large)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
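A minimal inference sketch (not part of the original card): it assumes the checkpoint ships a standard text-classification head so the generic `pipeline` API applies, and the example review sentence is invented.
```python
from transformers import pipeline

# Hypothetical usage example; label names follow the aspect list above.
classifier = pipeline(
    "text-classification",
    model="aieng-lab/roberta-large_review-aspect",
)
print(classifier("The documentation for this API is incomplete and hard to follow."))
# expected shape: [{'label': ..., 'score': ...}]
```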
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
aieng-lab/bert-large-cased_review-aspect | aieng-lab | 2025-06-16T07:34:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"en",
"base_model:google-bert/bert-large-cased",
"base_model:finetune:google-bert/bert-large-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T07:34:44Z | ---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- bert-large-cased
pipeline_tag: text-classification
---
# BERT large for classifying API reviews
This model classifies API reviews in developer forums (e.g., Stack Overflow) as 'usability', 'others', 'onlysentiment', 'bug', 'performance', 'community', 'documentation', 'compatibility', 'legal', 'portability' or 'security'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [bert-large-cased](https://huggingface.co/bert-large-cased)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
aieng-lab/bert-base-cased_review-aspect | aieng-lab | 2025-06-16T07:34:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"en",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T07:34:07Z | ---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- bert-base-cased
pipeline_tag: text-classification
---
# BERT base for classifying API reviews
This model classifies API reviews in developer forums (e.g., Stack Overflow) as 'usability', 'others', 'onlysentiment', 'bug', 'performance', 'community', 'documentation', 'compatibility', 'legal', 'portability' or 'security'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [bert-base-cased](https://huggingface.co/bert-base-cased)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
Triangle104/Q3-8B-Kintsugi-Q8_0-GGUF | Triangle104 | 2025-06-16T07:33:15Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"axolotl",
"unsloth",
"roleplay",
"conversational",
"llama-cpp",
"gguf-my-repo",
"dataset:PygmalionAI/PIPPA",
"dataset:Alfitaria/nemotron-ultra-reasoning-synthkink",
"dataset:PocketDoc/Dans-Prosemaxx-Gutenberg",
"dataset:FreedomIntelligence/Medical-R1-Distill-Data",
"dataset:cognitivecomputations/SystemChat-2.0",
"dataset:allenai/tulu-3-sft-personas-instruction-following",
"dataset:kalomaze/Opus_Instruct_25k",
"dataset:simplescaling/s1K-claude-3-7-sonnet",
"dataset:ai2-adapt-dev/flan_v2_converted",
"dataset:grimulkan/theory-of-mind",
"dataset:grimulkan/physical-reasoning",
"dataset:nvidia/HelpSteer3",
"dataset:nbeerbower/gutenberg2-dpo",
"dataset:nbeerbower/gutenberg-moderne-dpo",
"dataset:nbeerbower/Purpura-DPO",
"dataset:antiven0m/physical-reasoning-dpo",
"dataset:allenai/tulu-3-IF-augmented-on-policy-70b",
"dataset:NobodyExistsOnTheInternet/system-message-DPO",
"base_model:allura-org/Q3-8B-Kintsugi",
"base_model:quantized:allura-org/Q3-8B-Kintsugi",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T07:24:06Z | ---
license: apache-2.0
base_model: allura-org/Q3-8B-Kintsugi
library_name: transformers
tags:
- mergekit
- axolotl
- unsloth
- roleplay
- conversational
- llama-cpp
- gguf-my-repo
datasets:
- PygmalionAI/PIPPA
- Alfitaria/nemotron-ultra-reasoning-synthkink
- PocketDoc/Dans-Prosemaxx-Gutenberg
- FreedomIntelligence/Medical-R1-Distill-Data
- cognitivecomputations/SystemChat-2.0
- allenai/tulu-3-sft-personas-instruction-following
- kalomaze/Opus_Instruct_25k
- simplescaling/s1K-claude-3-7-sonnet
- ai2-adapt-dev/flan_v2_converted
- grimulkan/theory-of-mind
- grimulkan/physical-reasoning
- nvidia/HelpSteer3
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
- nbeerbower/Purpura-DPO
- antiven0m/physical-reasoning-dpo
- allenai/tulu-3-IF-augmented-on-policy-70b
- NobodyExistsOnTheInternet/system-message-DPO
---
# Triangle104/Q3-8B-Kintsugi-Q8_0-GGUF
This model was converted to GGUF format from [`allura-org/Q3-8B-Kintsugi`](https://huggingface.co/allura-org/Q3-8B-Kintsugi) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/allura-org/Q3-8B-Kintsugi) for more details on the model.
---
Q3-8B-Kintsugi is a roleplaying model finetuned from Qwen3-8B-Base.
During testing, Kintsugi punched well above its weight class in terms of parameters, especially for 1-on-1 roleplaying and general storywriting.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Q3-8B-Kintsugi-Q8_0-GGUF --hf-file q3-8b-kintsugi-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Q3-8B-Kintsugi-Q8_0-GGUF --hf-file q3-8b-kintsugi-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Q3-8B-Kintsugi-Q8_0-GGUF --hf-file q3-8b-kintsugi-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Q3-8B-Kintsugi-Q8_0-GGUF --hf-file q3-8b-kintsugi-q8_0.gguf -c 2048
```
|
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.5_0.15_0.25_epoch1 | MinaMila | 2025-06-16T07:32:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T07:30:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
xfjcoder/llama3.1-8b-erged-6bit | xfjcoder | 2025-06-16T07:31:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T07:31:17Z | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** xfjcoder
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
goodcasper/kvasir_seg_rtdetr_r18 | goodcasper | 2025-06-16T07:29:59Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"rt_detr_v2",
"object-detection",
"generated_from_trainer",
"base_model:PekingU/rtdetr_r18vd_coco_o365",
"base_model:finetune:PekingU/rtdetr_r18vd_coco_o365",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | 2025-06-15T14:04:10Z | ---
library_name: transformers
license: apache-2.0
base_model: PekingU/rtdetr_r18vd_coco_o365
tags:
- generated_from_trainer
model-index:
- name: kvasir_seg_rtdetr_r18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kvasir_seg_rtdetr_r18
This model is a fine-tuned version of [PekingU/rtdetr_r18vd_coco_o365](https://huggingface.co/PekingU/rtdetr_r18vd_coco_o365) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8559
- Map: 0.6996
- Map 50: 0.9114
- Map 75: 0.768
- Map Small: 0.0
- Map Medium: 0.4886
- Map Large: 0.7135
- Mar 1: 0.7199
- Mar 10: 0.8393
- Mar 100: 0.8915
- Mar Small: 0.0
- Mar Medium: 0.84
- Mar Large: 0.8985
- Map Polyp: 0.6996
- Mar 100 Polyp: 0.8915
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an approximate `TrainingArguments` equivalent is sketched after the list):
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 1
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 300
- num_epochs: 150
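For readers who want to reproduce this setup, the list above maps roughly onto a 🤗 `TrainingArguments` object. This is a sketch reconstructed from the listed values, not the exact training script; the `output_dir` is a placeholder.
```python
from transformers import TrainingArguments

# Approximate reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="kvasir_seg_rtdetr_r18",   # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=1,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="cosine",
    warmup_steps=300,
    num_train_epochs=150,
)
```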
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Polyp | Mar 100 Polyp |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:---------:|:-------------:|
| 328.7584 | 1.0 | 100 | 105.4353 | 0.0217 | 0.0467 | 0.0164 | 0.0 | 0.0 | 0.0235 | 0.0844 | 0.2213 | 0.2839 | 0.0 | 0.0 | 0.2995 | 0.0217 | 0.2839 |
| 54.5087 | 2.0 | 200 | 29.3093 | 0.0641 | 0.1127 | 0.0625 | 0.0 | 0.1482 | 0.0671 | 0.2147 | 0.5171 | 0.6886 | 0.0 | 0.24 | 0.7145 | 0.0641 | 0.6886 |
| 18.273 | 3.0 | 300 | 13.9644 | 0.195 | 0.2841 | 0.2099 | 0.0 | 0.0152 | 0.2089 | 0.3654 | 0.6815 | 0.7872 | 0.0 | 0.59 | 0.801 | 0.195 | 0.7872 |
| 12.3686 | 4.0 | 400 | 9.2201 | 0.2061 | 0.2977 | 0.2192 | 0.0 | 0.0328 | 0.2194 | 0.3621 | 0.7118 | 0.8924 | 0.0 | 0.7 | 0.9065 | 0.2061 | 0.8924 |
| 10.3152 | 5.0 | 500 | 7.9569 | 0.4353 | 0.561 | 0.4655 | 0.0 | 0.1308 | 0.4571 | 0.5218 | 0.7815 | 0.8905 | 0.0 | 0.65 | 0.907 | 0.4353 | 0.8905 |
| 9.877 | 6.0 | 600 | 8.6031 | 0.3128 | 0.399 | 0.3417 | 0.0 | 0.1584 | 0.3279 | 0.4848 | 0.7682 | 0.91 | 0.0 | 0.76 | 0.922 | 0.3128 | 0.91 |
| 9.3949 | 7.0 | 700 | 7.6619 | 0.5258 | 0.7045 | 0.5629 | 0.0 | 0.1969 | 0.5489 | 0.5844 | 0.7763 | 0.9185 | 0.0 | 0.67 | 0.9355 | 0.5258 | 0.9185 |
| 8.9883 | 8.0 | 800 | 7.7975 | 0.4233 | 0.57 | 0.4376 | 0.0 | 0.1904 | 0.4408 | 0.5422 | 0.791 | 0.9128 | 0.0 | 0.82 | 0.922 | 0.4233 | 0.9128 |
| 8.5345 | 9.0 | 900 | 7.6628 | 0.2454 | 0.3437 | 0.2641 | 0.0 | 0.2014 | 0.2574 | 0.391 | 0.7118 | 0.9028 | 0.0 | 0.75 | 0.915 | 0.2454 | 0.9028 |
| 8.3354 | 10.0 | 1000 | 8.2167 | 0.2949 | 0.4087 | 0.3156 | 0.0 | 0.1208 | 0.3114 | 0.4507 | 0.7199 | 0.8872 | 0.0 | 0.6 | 0.906 | 0.2949 | 0.8872 |
| 8.044 | 11.0 | 1100 | 7.1574 | 0.4414 | 0.5745 | 0.4718 | 0.0 | 0.2895 | 0.461 | 0.5569 | 0.7905 | 0.9076 | 0.0 | 0.78 | 0.9185 | 0.4414 | 0.9076 |
| 8.0058 | 12.0 | 1200 | 7.0897 | 0.344 | 0.4723 | 0.3711 | 0.0 | 0.2359 | 0.3603 | 0.5246 | 0.8232 | 0.9137 | 0.0 | 0.78 | 0.925 | 0.344 | 0.9137 |
| 7.8317 | 13.0 | 1300 | 7.3114 | 0.3671 | 0.5186 | 0.3898 | 0.0 | 0.1734 | 0.3846 | 0.5351 | 0.7711 | 0.9081 | 0.0 | 0.8 | 0.918 | 0.3671 | 0.9081 |
| 7.5776 | 14.0 | 1400 | 7.5346 | 0.3043 | 0.4337 | 0.3217 | 0.0 | 0.1102 | 0.3199 | 0.473 | 0.7773 | 0.9109 | 0.0 | 0.79 | 0.9215 | 0.3043 | 0.9109 |
| 7.367 | 15.0 | 1500 | 6.6930 | 0.4188 | 0.5544 | 0.4475 | 0.0 | 0.2315 | 0.4357 | 0.546 | 0.8142 | 0.9218 | 0.0 | 0.84 | 0.9305 | 0.4188 | 0.9218 |
| 7.249 | 16.0 | 1600 | 6.7257 | 0.4509 | 0.6084 | 0.4975 | 0.0 | 0.2494 | 0.4683 | 0.5981 | 0.8469 | 0.909 | 0.0 | 0.8 | 0.919 | 0.4509 | 0.909 |
| 7.1063 | 17.0 | 1700 | 7.8123 | 0.3076 | 0.4209 | 0.3293 | 0.0 | 0.1455 | 0.3224 | 0.4943 | 0.7412 | 0.9043 | 0.0 | 0.77 | 0.9155 | 0.3076 | 0.9043 |
| 7.1784 | 18.0 | 1800 | 7.4297 | 0.3582 | 0.4877 | 0.3739 | 0.0 | 0.1887 | 0.3749 | 0.5071 | 0.7962 | 0.9166 | 0.0 | 0.74 | 0.93 | 0.3582 | 0.9166 |
| 7.0446 | 19.0 | 1900 | 7.1160 | 0.3486 | 0.4704 | 0.3853 | 0.0 | 0.203 | 0.3635 | 0.5555 | 0.7957 | 0.909 | 0.0 | 0.78 | 0.92 | 0.3486 | 0.909 |
| 7.0708 | 20.0 | 2000 | 6.9090 | 0.4352 | 0.6138 | 0.4673 | 0.0 | 0.1152 | 0.455 | 0.5341 | 0.7995 | 0.9185 | 0.0 | 0.87 | 0.9255 | 0.4352 | 0.9185 |
| 7.0069 | 21.0 | 2100 | 6.4302 | 0.4221 | 0.5845 | 0.439 | 0.0 | 0.1887 | 0.44 | 0.5038 | 0.828 | 0.9194 | 0.0 | 0.8 | 0.93 | 0.4221 | 0.9194 |
| 6.9179 | 22.0 | 2200 | 6.8735 | 0.2883 | 0.4071 | 0.3019 | 0.0 | 0.1738 | 0.301 | 0.4047 | 0.7839 | 0.9009 | 0.0 | 0.77 | 0.912 | 0.2883 | 0.9009 |
| 6.7794 | 23.0 | 2300 | 6.9011 | 0.4379 | 0.6018 | 0.481 | 0.0 | 0.1178 | 0.4587 | 0.5265 | 0.7915 | 0.9052 | 0.0 | 0.8 | 0.915 | 0.4379 | 0.9052 |
| 6.9367 | 24.0 | 2400 | 5.8450 | 0.5848 | 0.7835 | 0.6384 | 0.0 | 0.1811 | 0.61 | 0.6313 | 0.8398 | 0.9171 | 0.0 | 0.77 | 0.929 | 0.5848 | 0.9171 |
| 6.8596 | 25.0 | 2500 | 6.6438 | 0.3867 | 0.5434 | 0.4014 | 0.0 | 0.173 | 0.4041 | 0.5246 | 0.7773 | 0.8995 | 0.0 | 0.77 | 0.9105 | 0.3867 | 0.8995 |
| 6.7357 | 26.0 | 2600 | 7.0414 | 0.4158 | 0.5564 | 0.4631 | 0.0 | 0.2366 | 0.4349 | 0.5796 | 0.7967 | 0.8948 | 0.0 | 0.72 | 0.908 | 0.4158 | 0.8948 |
| 6.7968 | 27.0 | 2700 | 6.7433 | 0.528 | 0.7024 | 0.5641 | 0.0 | 0.1794 | 0.5525 | 0.6123 | 0.8573 | 0.9209 | 0.0 | 0.79 | 0.932 | 0.528 | 0.9209 |
| 6.5425 | 28.0 | 2800 | 6.8662 | 0.4346 | 0.587 | 0.4484 | 0.0 | 0.1637 | 0.4535 | 0.6251 | 0.828 | 0.9109 | 0.0 | 0.82 | 0.92 | 0.4346 | 0.9109 |
| 6.6149 | 29.0 | 2900 | 7.1492 | 0.4277 | 0.5659 | 0.4601 | 0.0 | 0.357 | 0.4467 | 0.5716 | 0.8237 | 0.9114 | 0.0 | 0.77 | 0.923 | 0.4277 | 0.9114 |
| 6.4956 | 30.0 | 3000 | 6.4835 | 0.4805 | 0.6497 | 0.5108 | 0.0 | 0.2116 | 0.5006 | 0.6493 | 0.8431 | 0.9081 | 0.0 | 0.78 | 0.919 | 0.4805 | 0.9081 |
| 6.325 | 31.0 | 3100 | 6.4725 | 0.4354 | 0.5666 | 0.4718 | 0.0 | 0.3062 | 0.4543 | 0.6327 | 0.8403 | 0.9114 | 0.0 | 0.77 | 0.923 | 0.4354 | 0.9114 |
| 6.4792 | 32.0 | 3200 | 6.5891 | 0.4733 | 0.6166 | 0.4951 | 0.0 | 0.1749 | 0.4961 | 0.646 | 0.8355 | 0.9118 | 0.0 | 0.77 | 0.9235 | 0.4733 | 0.9118 |
| 6.4267 | 33.0 | 3300 | 6.2471 | 0.5297 | 0.6574 | 0.584 | 0.0 | 0.417 | 0.5506 | 0.6559 | 0.8649 | 0.91 | 0.0 | 0.72 | 0.924 | 0.5297 | 0.91 |
| 6.4015 | 34.0 | 3400 | 6.2159 | 0.5927 | 0.7575 | 0.6442 | 0.0 | 0.3905 | 0.614 | 0.6559 | 0.8393 | 0.9071 | 0.0 | 0.85 | 0.9145 | 0.5927 | 0.9071 |
| 6.2961 | 35.0 | 3500 | 6.9543 | 0.4289 | 0.6037 | 0.45 | 0.0 | 0.3703 | 0.442 | 0.4882 | 0.7972 | 0.8815 | 0.0 | 0.81 | 0.8895 | 0.4289 | 0.8815 |
| 6.1437 | 36.0 | 3600 | 6.2903 | 0.5107 | 0.6672 | 0.5503 | 0.0 | 0.1753 | 0.5298 | 0.6137 | 0.8436 | 0.9095 | 0.0 | 0.8 | 0.9195 | 0.5107 | 0.9095 |
| 6.1999 | 37.0 | 3700 | 6.4701 | 0.465 | 0.6323 | 0.4888 | 0.0 | 0.2612 | 0.4825 | 0.5555 | 0.7877 | 0.8905 | 0.0 | 0.81 | 0.899 | 0.465 | 0.8905 |
| 6.1331 | 38.0 | 3800 | 6.0898 | 0.461 | 0.6161 | 0.4959 | 0.0 | 0.1816 | 0.4799 | 0.6327 | 0.8218 | 0.9 | 0.0 | 0.79 | 0.91 | 0.461 | 0.9 |
| 6.1776 | 39.0 | 3900 | 6.2661 | 0.468 | 0.6355 | 0.502 | 0.0 | 0.3374 | 0.486 | 0.546 | 0.8232 | 0.9062 | 0.0 | 0.8 | 0.916 | 0.468 | 0.9062 |
| 6.018 | 40.0 | 4000 | 5.8017 | 0.5797 | 0.7618 | 0.6432 | 0.0 | 0.2132 | 0.6023 | 0.6038 | 0.8445 | 0.9043 | 0.0 | 0.85 | 0.9115 | 0.5797 | 0.9043 |
| 6.1899 | 41.0 | 4100 | 5.7870 | 0.5732 | 0.7472 | 0.6297 | 0.0 | 0.1139 | 0.5978 | 0.6517 | 0.836 | 0.9047 | 0.0 | 0.85 | 0.912 | 0.5732 | 0.9047 |
| 5.9885 | 42.0 | 4200 | 5.8262 | 0.6162 | 0.793 | 0.6722 | 0.0 | 0.36 | 0.6394 | 0.637 | 0.8573 | 0.9076 | 0.0 | 0.83 | 0.916 | 0.6162 | 0.9076 |
| 5.9469 | 43.0 | 4300 | 5.9656 | 0.5803 | 0.7559 | 0.622 | 0.0 | 0.4129 | 0.603 | 0.6483 | 0.8384 | 0.9047 | 0.0 | 0.85 | 0.912 | 0.5803 | 0.9047 |
| 5.9479 | 44.0 | 4400 | 6.0606 | 0.5878 | 0.7564 | 0.6396 | 0.0 | 0.3012 | 0.6089 | 0.6066 | 0.8479 | 0.9043 | 0.0 | 0.84 | 0.912 | 0.5878 | 0.9043 |
| 5.8736 | 45.0 | 4500 | 6.1277 | 0.5992 | 0.7708 | 0.6414 | 0.0 | 0.25 | 0.6238 | 0.6588 | 0.8474 | 0.9071 | 0.0 | 0.83 | 0.9155 | 0.5992 | 0.9071 |
| 6.0308 | 46.0 | 4600 | 6.0510 | 0.5644 | 0.7258 | 0.6109 | 0.0 | 0.3322 | 0.5876 | 0.6626 | 0.8474 | 0.9014 | 0.0 | 0.85 | 0.9085 | 0.5644 | 0.9014 |
| 5.8821 | 47.0 | 4700 | 6.1862 | 0.5191 | 0.651 | 0.5563 | 0.0 | 0.1103 | 0.5429 | 0.6441 | 0.8384 | 0.9114 | 0.0 | 0.82 | 0.9205 | 0.5191 | 0.9114 |
| 5.8174 | 48.0 | 4800 | 6.2716 | 0.5513 | 0.7038 | 0.5877 | 0.0 | 0.1605 | 0.5752 | 0.6398 | 0.8526 | 0.9142 | 0.0 | 0.84 | 0.9225 | 0.5513 | 0.9142 |
| 5.8138 | 49.0 | 4900 | 6.0443 | 0.5792 | 0.7603 | 0.6092 | 0.0 | 0.3321 | 0.5998 | 0.6393 | 0.8346 | 0.8981 | 0.0 | 0.79 | 0.908 | 0.5792 | 0.8981 |
| 5.8482 | 50.0 | 5000 | 6.5761 | 0.4936 | 0.6482 | 0.5428 | 0.0 | 0.289 | 0.5163 | 0.5758 | 0.7943 | 0.8896 | 0.0 | 0.75 | 0.901 | 0.4936 | 0.8896 |
| 5.7062 | 51.0 | 5100 | 6.1185 | 0.5283 | 0.6686 | 0.5753 | 0.0 | 0.3035 | 0.5505 | 0.6232 | 0.8341 | 0.9118 | 0.0 | 0.87 | 0.9185 | 0.5283 | 0.9118 |
| 5.6989 | 52.0 | 5200 | 6.0044 | 0.5856 | 0.7479 | 0.6369 | 0.0 | 0.2705 | 0.6085 | 0.6844 | 0.8308 | 0.8962 | 0.0 | 0.82 | 0.9045 | 0.5856 | 0.8962 |
| 5.5779 | 53.0 | 5300 | 5.9162 | 0.6011 | 0.77 | 0.6466 | 0.0 | 0.3969 | 0.6239 | 0.7028 | 0.8536 | 0.9 | 0.0 | 0.84 | 0.9075 | 0.6011 | 0.9 |
| 5.5422 | 54.0 | 5400 | 6.1687 | 0.5609 | 0.7387 | 0.6005 | 0.0 | 0.2592 | 0.5814 | 0.6218 | 0.8336 | 0.8995 | 0.0 | 0.84 | 0.907 | 0.5609 | 0.8995 |
| 5.5615 | 55.0 | 5500 | 7.0190 | 0.456 | 0.631 | 0.4893 | 0.0 | 0.3124 | 0.4725 | 0.5336 | 0.8024 | 0.8886 | 0.0 | 0.83 | 0.896 | 0.456 | 0.8886 |
| 5.5674 | 56.0 | 5600 | 6.3056 | 0.5218 | 0.7175 | 0.5431 | 0.0 | 0.4512 | 0.5397 | 0.6318 | 0.837 | 0.8967 | 0.0 | 0.86 | 0.903 | 0.5218 | 0.8967 |
| 5.4516 | 57.0 | 5700 | 5.6783 | 0.6161 | 0.8042 | 0.6533 | 0.0 | 0.4324 | 0.6357 | 0.6716 | 0.8469 | 0.9 | 0.0 | 0.87 | 0.906 | 0.6161 | 0.9 |
| 5.415 | 58.0 | 5800 | 5.9260 | 0.6019 | 0.7819 | 0.6293 | 0.0 | 0.4337 | 0.6246 | 0.6725 | 0.8408 | 0.9009 | 0.0 | 0.83 | 0.909 | 0.6019 | 0.9009 |
| 5.5503 | 59.0 | 5900 | 6.1504 | 0.5291 | 0.7104 | 0.5634 | 0.0 | 0.4015 | 0.5479 | 0.591 | 0.8502 | 0.9014 | 0.0 | 0.82 | 0.91 | 0.5291 | 0.9014 |
| 5.4917 | 60.0 | 6000 | 5.8398 | 0.5809 | 0.7735 | 0.6137 | 0.0 | 0.3969 | 0.6001 | 0.6474 | 0.8573 | 0.8934 | 0.0 | 0.79 | 0.903 | 0.5809 | 0.8934 |
| 5.5592 | 61.0 | 6100 | 5.7688 | 0.611 | 0.7941 | 0.6522 | 0.0 | 0.3962 | 0.6297 | 0.6408 | 0.8403 | 0.8995 | 0.0 | 0.84 | 0.907 | 0.611 | 0.8995 |
| 5.3401 | 62.0 | 6200 | 5.7462 | 0.5848 | 0.775 | 0.5963 | 0.0 | 0.4851 | 0.6033 | 0.6536 | 0.8474 | 0.9047 | 0.0 | 0.88 | 0.9105 | 0.5848 | 0.9047 |
| 5.3558 | 63.0 | 6300 | 5.8356 | 0.5816 | 0.7882 | 0.619 | 0.0 | 0.4334 | 0.5982 | 0.6332 | 0.8223 | 0.8791 | 0.0 | 0.9 | 0.8825 | 0.5816 | 0.8791 |
| 5.326 | 64.0 | 6400 | 6.2211 | 0.5195 | 0.707 | 0.5443 | 0.0 | 0.3136 | 0.5373 | 0.5886 | 0.8256 | 0.8877 | 0.0 | 0.83 | 0.895 | 0.5195 | 0.8877 |
| 5.278 | 65.0 | 6500 | 5.3357 | 0.6396 | 0.8254 | 0.6918 | 0.0 | 0.473 | 0.6568 | 0.6773 | 0.8545 | 0.8986 | 0.0 | 0.87 | 0.9045 | 0.6396 | 0.8986 |
| 5.315 | 66.0 | 6600 | 5.8402 | 0.6183 | 0.8016 | 0.6708 | 0.0 | 0.4752 | 0.6386 | 0.6616 | 0.8602 | 0.9005 | 0.0 | 0.87 | 0.9065 | 0.6183 | 0.9005 |
| 5.3418 | 67.0 | 6700 | 5.4617 | 0.6167 | 0.8373 | 0.6565 | 0.0 | 0.4112 | 0.6353 | 0.6588 | 0.8474 | 0.8981 | 0.0 | 0.85 | 0.905 | 0.6167 | 0.8981 |
| 5.2841 | 68.0 | 6800 | 5.4826 | 0.6696 | 0.8578 | 0.7252 | 0.0 | 0.3299 | 0.6891 | 0.6882 | 0.8526 | 0.8981 | 0.0 | 0.87 | 0.904 | 0.6696 | 0.8981 |
| 5.2984 | 69.0 | 6900 | 5.4558 | 0.6553 | 0.866 | 0.6974 | 0.0 | 0.4049 | 0.671 | 0.6863 | 0.8355 | 0.9 | 0.0 | 0.88 | 0.9055 | 0.6553 | 0.9 |
| 5.1851 | 70.0 | 7000 | 5.9259 | 0.5741 | 0.7469 | 0.6277 | 0.0 | 0.2257 | 0.5971 | 0.6161 | 0.8303 | 0.8934 | 0.0 | 0.86 | 0.8995 | 0.5741 | 0.8934 |
| 5.181 | 71.0 | 7100 | 5.4635 | 0.6607 | 0.8592 | 0.7119 | 0.0 | 0.4913 | 0.6761 | 0.6806 | 0.8412 | 0.9024 | 0.0 | 0.84 | 0.91 | 0.6607 | 0.9024 |
| 5.1508 | 72.0 | 7200 | 5.6232 | 0.611 | 0.8037 | 0.6533 | 0.0 | 0.4713 | 0.6282 | 0.654 | 0.8422 | 0.9052 | 0.0 | 0.89 | 0.9105 | 0.611 | 0.9052 |
| 5.1352 | 73.0 | 7300 | 5.6329 | 0.6132 | 0.818 | 0.6484 | 0.0 | 0.5055 | 0.6291 | 0.6659 | 0.8408 | 0.8948 | 0.0 | 0.84 | 0.902 | 0.6132 | 0.8948 |
| 5.0835 | 74.0 | 7400 | 5.2254 | 0.6655 | 0.862 | 0.7302 | 0.0 | 0.4657 | 0.6816 | 0.6948 | 0.8545 | 0.8991 | 0.0 | 0.88 | 0.9045 | 0.6655 | 0.8991 |
| 5.0506 | 75.0 | 7500 | 5.5285 | 0.6407 | 0.8335 | 0.6843 | 0.0 | 0.3756 | 0.6603 | 0.6673 | 0.8559 | 0.9052 | 0.0 | 0.87 | 0.9115 | 0.6407 | 0.9052 |
| 5.0889 | 76.0 | 7600 | 5.6099 | 0.604 | 0.8039 | 0.6636 | 0.0 | 0.4564 | 0.6204 | 0.6583 | 0.8308 | 0.8924 | 0.0 | 0.86 | 0.8985 | 0.604 | 0.8924 |
| 5.0013 | 77.0 | 7700 | 5.2094 | 0.6616 | 0.863 | 0.7203 | 0.0 | 0.3771 | 0.6808 | 0.6962 | 0.8479 | 0.8896 | 0.0 | 0.85 | 0.896 | 0.6616 | 0.8896 |
| 5.1114 | 78.0 | 7800 | 5.1092 | 0.6651 | 0.8656 | 0.7255 | 0.0 | 0.4837 | 0.6786 | 0.6882 | 0.8488 | 0.8891 | 0.0 | 0.85 | 0.8955 | 0.6651 | 0.8891 |
| 5.0461 | 79.0 | 7900 | 5.1339 | 0.6623 | 0.8689 | 0.718 | 0.0 | 0.4655 | 0.6796 | 0.6962 | 0.8455 | 0.9033 | 0.0 | 0.86 | 0.91 | 0.6623 | 0.9033 |
| 5.0452 | 80.0 | 8000 | 5.3328 | 0.6569 | 0.8528 | 0.7302 | 0.0 | 0.4636 | 0.6762 | 0.6948 | 0.8469 | 0.8972 | 0.0 | 0.88 | 0.9025 | 0.6569 | 0.8972 |
| 5.0501 | 81.0 | 8100 | 5.5217 | 0.6503 | 0.8424 | 0.6965 | 0.0 | 0.408 | 0.6679 | 0.673 | 0.8431 | 0.8957 | 0.0 | 0.86 | 0.902 | 0.6503 | 0.8957 |
| 4.9118 | 82.0 | 8200 | 5.6498 | 0.6337 | 0.8333 | 0.6797 | 0.0 | 0.3812 | 0.6538 | 0.6597 | 0.8289 | 0.8938 | 0.0 | 0.84 | 0.901 | 0.6337 | 0.8938 |
| 4.971 | 83.0 | 8300 | 5.7732 | 0.6249 | 0.823 | 0.6661 | 0.0 | 0.467 | 0.6435 | 0.6559 | 0.8389 | 0.891 | 0.0 | 0.78 | 0.901 | 0.6249 | 0.891 |
| 4.8744 | 84.0 | 8400 | 5.3769 | 0.678 | 0.8753 | 0.738 | 0.0 | 0.4127 | 0.6943 | 0.6981 | 0.8517 | 0.9005 | 0.0 | 0.9 | 0.905 | 0.678 | 0.9005 |
| 4.9024 | 85.0 | 8500 | 5.2106 | 0.6799 | 0.8813 | 0.7143 | 0.0 | 0.392 | 0.6986 | 0.7033 | 0.8427 | 0.8943 | 0.0 | 0.84 | 0.9015 | 0.6799 | 0.8943 |
| 4.8361 | 86.0 | 8600 | 5.1924 | 0.6726 | 0.8596 | 0.7193 | 0.0 | 0.3602 | 0.6921 | 0.6981 | 0.8521 | 0.9005 | 0.0 | 0.86 | 0.907 | 0.6726 | 0.9005 |
| 4.7713 | 87.0 | 8700 | 5.2097 | 0.6786 | 0.8854 | 0.7263 | 0.0 | 0.389 | 0.6958 | 0.6853 | 0.8488 | 0.8991 | 0.0 | 0.87 | 0.905 | 0.6786 | 0.8991 |
| 4.8765 | 88.0 | 8800 | 5.4051 | 0.6566 | 0.8497 | 0.7137 | 0.0 | 0.4111 | 0.6757 | 0.6773 | 0.8398 | 0.8825 | 0.0 | 0.82 | 0.89 | 0.6566 | 0.8825 |
| 4.7893 | 89.0 | 8900 | 5.3679 | 0.6497 | 0.8468 | 0.7021 | 0.0 | 0.4852 | 0.6671 | 0.6886 | 0.8393 | 0.8967 | 0.0 | 0.86 | 0.903 | 0.6497 | 0.8967 |
| 4.7177 | 90.0 | 9000 | 5.3805 | 0.6533 | 0.8489 | 0.7066 | 0.0 | 0.4542 | 0.6711 | 0.6919 | 0.8313 | 0.8915 | 0.0 | 0.82 | 0.8995 | 0.6533 | 0.8915 |
| 4.7715 | 91.0 | 9100 | 5.6189 | 0.6267 | 0.8128 | 0.6736 | 0.0 | 0.4737 | 0.6424 | 0.6564 | 0.8427 | 0.8967 | 0.0 | 0.86 | 0.903 | 0.6267 | 0.8967 |
| 4.7119 | 92.0 | 9200 | 5.5511 | 0.6398 | 0.8312 | 0.6803 | 0.0 | 0.5786 | 0.6559 | 0.6526 | 0.8318 | 0.8929 | 0.0 | 0.83 | 0.9005 | 0.6398 | 0.8929 |
| 4.695 | 93.0 | 9300 | 5.5682 | 0.6327 | 0.8241 | 0.667 | 0.0 | 0.4086 | 0.6499 | 0.6716 | 0.8327 | 0.8924 | 0.0 | 0.85 | 0.899 | 0.6327 | 0.8924 |
| 4.6644 | 94.0 | 9400 | 5.3228 | 0.662 | 0.855 | 0.6994 | 0.0 | 0.3843 | 0.68 | 0.6858 | 0.8464 | 0.8962 | 0.0 | 0.87 | 0.902 | 0.662 | 0.8962 |
| 4.6613 | 95.0 | 9500 | 5.2617 | 0.6644 | 0.8642 | 0.7088 | 0.0 | 0.5257 | 0.6779 | 0.6929 | 0.8488 | 0.8981 | 0.0 | 0.83 | 0.906 | 0.6644 | 0.8981 |
| 4.6428 | 96.0 | 9600 | 5.0369 | 0.6893 | 0.8848 | 0.7569 | 0.0 | 0.4619 | 0.7064 | 0.709 | 0.8583 | 0.8934 | 0.0 | 0.82 | 0.9015 | 0.6893 | 0.8934 |
| 4.6383 | 97.0 | 9700 | 5.3712 | 0.6373 | 0.8179 | 0.6922 | 0.0 | 0.4024 | 0.6536 | 0.6664 | 0.8488 | 0.8991 | 0.0 | 0.89 | 0.904 | 0.6373 | 0.8991 |
| 4.6614 | 98.0 | 9800 | 5.2165 | 0.6599 | 0.857 | 0.7168 | 0.0 | 0.3705 | 0.6784 | 0.6891 | 0.8403 | 0.8882 | 0.0 | 0.86 | 0.894 | 0.6599 | 0.8882 |
| 4.6073 | 99.0 | 9900 | 5.2039 | 0.6479 | 0.8478 | 0.6905 | 0.0 | 0.4192 | 0.666 | 0.6825 | 0.8445 | 0.8877 | 0.0 | 0.83 | 0.895 | 0.6479 | 0.8877 |
| 4.5726 | 100.0 | 10000 | 5.2745 | 0.663 | 0.8686 | 0.7038 | 0.0 | 0.4939 | 0.6774 | 0.6886 | 0.8431 | 0.8848 | 0.0 | 0.85 | 0.891 | 0.663 | 0.8848 |
| 4.5516 | 101.0 | 10100 | 5.1948 | 0.6682 | 0.8786 | 0.7233 | 0.0 | 0.5113 | 0.6813 | 0.7066 | 0.8389 | 0.8863 | 0.0 | 0.87 | 0.8915 | 0.6682 | 0.8863 |
| 4.5178 | 102.0 | 10200 | 5.0016 | 0.6808 | 0.8885 | 0.7253 | 0.0 | 0.4145 | 0.6975 | 0.6991 | 0.8398 | 0.8882 | 0.0 | 0.88 | 0.893 | 0.6808 | 0.8882 |
| 4.554 | 103.0 | 10300 | 5.1041 | 0.6713 | 0.8764 | 0.7146 | 0.0 | 0.484 | 0.6839 | 0.7028 | 0.8365 | 0.8896 | 0.0 | 0.87 | 0.895 | 0.6713 | 0.8896 |
| 4.5047 | 104.0 | 10400 | 5.0536 | 0.6733 | 0.8881 | 0.7346 | 0.0 | 0.5161 | 0.6861 | 0.69 | 0.8313 | 0.8801 | 0.0 | 0.84 | 0.8865 | 0.6733 | 0.8801 |
| 4.4769 | 105.0 | 10500 | 5.0607 | 0.6881 | 0.9005 | 0.7435 | 0.0 | 0.5438 | 0.6998 | 0.7137 | 0.8389 | 0.882 | 0.0 | 0.82 | 0.8895 | 0.6881 | 0.882 |
| 4.4684 | 106.0 | 10600 | 5.0023 | 0.6854 | 0.8926 | 0.7468 | 0.0 | 0.5582 | 0.6987 | 0.718 | 0.8408 | 0.8953 | 0.0 | 0.82 | 0.9035 | 0.6854 | 0.8953 |
| 4.5203 | 107.0 | 10700 | 5.0049 | 0.6807 | 0.8826 | 0.7457 | 0.0 | 0.554 | 0.6952 | 0.7171 | 0.8645 | 0.8943 | 0.0 | 0.83 | 0.902 | 0.6807 | 0.8943 |
| 4.413 | 108.0 | 10800 | 4.9532 | 0.6846 | 0.8939 | 0.7492 | 0.0 | 0.5038 | 0.6991 | 0.718 | 0.8403 | 0.8915 | 0.0 | 0.81 | 0.9 | 0.6846 | 0.8915 |
| 4.4614 | 109.0 | 10900 | 4.9174 | 0.6956 | 0.8992 | 0.7565 | 0.0 | 0.4832 | 0.7107 | 0.7218 | 0.8427 | 0.8953 | 0.0 | 0.84 | 0.9025 | 0.6956 | 0.8953 |
| 4.4055 | 110.0 | 11000 | 5.0199 | 0.6802 | 0.8868 | 0.7409 | 0.0 | 0.4514 | 0.6963 | 0.7076 | 0.8346 | 0.8991 | 0.0 | 0.82 | 0.9075 | 0.6802 | 0.8991 |
| 4.3637 | 111.0 | 11100 | 4.9812 | 0.6814 | 0.8892 | 0.7393 | 0.0 | 0.5097 | 0.696 | 0.7171 | 0.8469 | 0.8919 | 0.0 | 0.79 | 0.9015 | 0.6814 | 0.8919 |
| 4.4024 | 112.0 | 11200 | 4.9323 | 0.6884 | 0.9042 | 0.7445 | 0.0 | 0.5403 | 0.7008 | 0.7133 | 0.845 | 0.8915 | 0.0 | 0.82 | 0.8995 | 0.6884 | 0.8915 |
| 4.3255 | 113.0 | 11300 | 4.8589 | 0.6922 | 0.9051 | 0.7643 | 0.0 | 0.5214 | 0.7066 | 0.7114 | 0.8417 | 0.8905 | 0.0 | 0.82 | 0.8985 | 0.6922 | 0.8905 |
| 4.3582 | 114.0 | 11400 | 4.9238 | 0.6879 | 0.9075 | 0.749 | 0.0 | 0.4796 | 0.7023 | 0.7147 | 0.8374 | 0.8877 | 0.0 | 0.81 | 0.896 | 0.6879 | 0.8877 |
| 4.287 | 115.0 | 11500 | 4.9863 | 0.6886 | 0.9018 | 0.7404 | 0.0 | 0.4966 | 0.703 | 0.7166 | 0.8408 | 0.8867 | 0.0 | 0.83 | 0.894 | 0.6886 | 0.8867 |
| 4.3159 | 116.0 | 11600 | 5.0250 | 0.6752 | 0.8889 | 0.7215 | 0.0 | 0.5872 | 0.6885 | 0.6948 | 0.8455 | 0.8948 | 0.0 | 0.82 | 0.903 | 0.6752 | 0.8948 |
| 4.3536 | 117.0 | 11700 | 4.8935 | 0.6867 | 0.8959 | 0.7413 | 0.0 | 0.4453 | 0.7024 | 0.7142 | 0.8398 | 0.8872 | 0.0 | 0.82 | 0.895 | 0.6867 | 0.8872 |
| 4.2804 | 118.0 | 11800 | 4.9059 | 0.6874 | 0.9026 | 0.7391 | 0.0 | 0.4907 | 0.701 | 0.7137 | 0.8403 | 0.8919 | 0.0 | 0.84 | 0.899 | 0.6874 | 0.8919 |
| 4.2833 | 119.0 | 11900 | 4.9931 | 0.679 | 0.8909 | 0.7321 | 0.0 | 0.5342 | 0.6925 | 0.7085 | 0.846 | 0.8953 | 0.0 | 0.84 | 0.9025 | 0.679 | 0.8953 |
| 4.294 | 120.0 | 12000 | 4.9152 | 0.6839 | 0.8974 | 0.7506 | 0.0 | 0.5001 | 0.6977 | 0.728 | 0.8469 | 0.8877 | 0.0 | 0.83 | 0.895 | 0.6839 | 0.8877 |
| 4.2876 | 121.0 | 12100 | 4.8559 | 0.6996 | 0.9114 | 0.768 | 0.0 | 0.4886 | 0.7135 | 0.7199 | 0.8393 | 0.8915 | 0.0 | 0.84 | 0.8985 | 0.6996 | 0.8915 |
| 4.3188 | 122.0 | 12200 | 5.0836 | 0.6661 | 0.8779 | 0.7081 | 0.0 | 0.5014 | 0.6801 | 0.6981 | 0.8403 | 0.8919 | 0.0 | 0.83 | 0.8995 | 0.6661 | 0.8919 |
| 4.2945 | 123.0 | 12300 | 4.9716 | 0.6945 | 0.9015 | 0.7443 | 0.0 | 0.5274 | 0.7081 | 0.7185 | 0.8436 | 0.8929 | 0.0 | 0.83 | 0.9005 | 0.6945 | 0.8929 |
| 4.2278 | 124.0 | 12400 | 4.9418 | 0.686 | 0.8984 | 0.7362 | 0.0 | 0.4882 | 0.6988 | 0.7204 | 0.8469 | 0.8867 | 0.0 | 0.83 | 0.894 | 0.686 | 0.8867 |
| 4.2593 | 125.0 | 12500 | 5.0332 | 0.686 | 0.9005 | 0.7381 | 0.0 | 0.5344 | 0.6971 | 0.7209 | 0.8408 | 0.8858 | 0.0 | 0.85 | 0.892 | 0.686 | 0.8858 |
| 4.2182 | 126.0 | 12600 | 4.9317 | 0.6852 | 0.8955 | 0.7404 | 0.0 | 0.5508 | 0.6976 | 0.7223 | 0.8441 | 0.8886 | 0.0 | 0.84 | 0.8955 | 0.6852 | 0.8886 |
| 4.2638 | 127.0 | 12700 | 5.0511 | 0.6708 | 0.8849 | 0.7274 | 0.0 | 0.5834 | 0.683 | 0.7123 | 0.8393 | 0.8886 | 0.0 | 0.85 | 0.895 | 0.6708 | 0.8886 |
| 4.1952 | 128.0 | 12800 | 5.0145 | 0.6722 | 0.889 | 0.7223 | 0.0 | 0.5595 | 0.6845 | 0.7052 | 0.8346 | 0.8825 | 0.0 | 0.84 | 0.889 | 0.6722 | 0.8825 |
| 4.2047 | 129.0 | 12900 | 4.9855 | 0.6842 | 0.8966 | 0.732 | 0.0 | 0.5326 | 0.6973 | 0.7175 | 0.8398 | 0.8806 | 0.0 | 0.84 | 0.887 | 0.6842 | 0.8806 |
| 4.215 | 130.0 | 13000 | 4.9706 | 0.6808 | 0.8958 | 0.7319 | 0.0 | 0.5291 | 0.6942 | 0.7114 | 0.8332 | 0.8877 | 0.0 | 0.83 | 0.895 | 0.6808 | 0.8877 |
| 4.2377 | 131.0 | 13100 | 4.9718 | 0.6839 | 0.898 | 0.7366 | 0.0 | 0.5537 | 0.6973 | 0.7104 | 0.8351 | 0.881 | 0.0 | 0.82 | 0.8885 | 0.6839 | 0.881 |
| 4.1521 | 132.0 | 13200 | 4.9658 | 0.6859 | 0.8964 | 0.745 | 0.0 | 0.5394 | 0.6986 | 0.7152 | 0.8384 | 0.8801 | 0.0 | 0.85 | 0.886 | 0.6859 | 0.8801 |
| 4.205 | 133.0 | 13300 | 4.9849 | 0.6792 | 0.8912 | 0.7382 | 0.0 | 0.4949 | 0.693 | 0.7081 | 0.8322 | 0.8839 | 0.0 | 0.83 | 0.891 | 0.6792 | 0.8839 |
| 4.1716 | 134.0 | 13400 | 4.9999 | 0.6804 | 0.8935 | 0.7385 | 0.0 | 0.4934 | 0.6937 | 0.7038 | 0.8408 | 0.8834 | 0.0 | 0.82 | 0.891 | 0.6804 | 0.8834 |
| 4.1556 | 135.0 | 13500 | 5.0120 | 0.6811 | 0.8936 | 0.7433 | 0.0 | 0.5101 | 0.6943 | 0.7019 | 0.8417 | 0.8815 | 0.0 | 0.82 | 0.889 | 0.6811 | 0.8815 |
| 4.1656 | 136.0 | 13600 | 5.0309 | 0.674 | 0.8867 | 0.7315 | 0.0 | 0.5147 | 0.6871 | 0.7066 | 0.8365 | 0.882 | 0.0 | 0.84 | 0.8885 | 0.674 | 0.882 |
| 4.1715 | 137.0 | 13700 | 4.9570 | 0.6795 | 0.8966 | 0.7313 | 0.0 | 0.4992 | 0.6932 | 0.7109 | 0.8313 | 0.8834 | 0.0 | 0.83 | 0.8905 | 0.6795 | 0.8834 |
| 4.1291 | 138.0 | 13800 | 4.9760 | 0.6796 | 0.8925 | 0.7315 | 0.0 | 0.511 | 0.6934 | 0.7161 | 0.8336 | 0.8872 | 0.0 | 0.84 | 0.894 | 0.6796 | 0.8872 |
| 4.1406 | 139.0 | 13900 | 4.9690 | 0.6851 | 0.8992 | 0.7308 | 0.0 | 0.5089 | 0.6988 | 0.7114 | 0.8294 | 0.8815 | 0.0 | 0.84 | 0.888 | 0.6851 | 0.8815 |
| 4.1242 | 140.0 | 14000 | 4.9718 | 0.6788 | 0.8946 | 0.7239 | 0.0 | 0.4992 | 0.6924 | 0.7038 | 0.8322 | 0.8829 | 0.0 | 0.84 | 0.8895 | 0.6788 | 0.8829 |
| 4.1337 | 141.0 | 14100 | 4.9832 | 0.6771 | 0.8921 | 0.7255 | 0.0 | 0.4862 | 0.6907 | 0.7095 | 0.8355 | 0.882 | 0.0 | 0.82 | 0.8895 | 0.6771 | 0.882 |
| 4.1766 | 142.0 | 14200 | 4.9979 | 0.6767 | 0.8893 | 0.7238 | 0.0 | 0.5086 | 0.6902 | 0.7147 | 0.8403 | 0.8834 | 0.0 | 0.84 | 0.89 | 0.6767 | 0.8834 |
| 4.158 | 143.0 | 14300 | 4.9638 | 0.6823 | 0.8977 | 0.7327 | 0.0 | 0.5184 | 0.6953 | 0.7104 | 0.8351 | 0.8863 | 0.0 | 0.84 | 0.893 | 0.6823 | 0.8863 |
| 4.1759 | 144.0 | 14400 | 4.9966 | 0.6756 | 0.8865 | 0.7228 | 0.0 | 0.5223 | 0.6885 | 0.7071 | 0.8384 | 0.8825 | 0.0 | 0.84 | 0.889 | 0.6756 | 0.8825 |
| 4.122 | 145.0 | 14500 | 4.9467 | 0.6819 | 0.8952 | 0.7302 | 0.0 | 0.5193 | 0.695 | 0.7152 | 0.8379 | 0.8858 | 0.0 | 0.83 | 0.893 | 0.6819 | 0.8858 |
| 4.1834 | 146.0 | 14600 | 4.9808 | 0.6808 | 0.895 | 0.7261 | 0.0 | 0.5239 | 0.6937 | 0.7057 | 0.8322 | 0.8867 | 0.0 | 0.83 | 0.894 | 0.6808 | 0.8867 |
| 4.1443 | 147.0 | 14700 | 4.9647 | 0.6848 | 0.9002 | 0.7371 | 0.0 | 0.5254 | 0.6979 | 0.7076 | 0.837 | 0.8825 | 0.0 | 0.84 | 0.889 | 0.6848 | 0.8825 |
| 4.1243 | 148.0 | 14800 | 5.0032 | 0.6754 | 0.8847 | 0.7296 | 0.0 | 0.525 | 0.6883 | 0.7104 | 0.8389 | 0.8844 | 0.0 | 0.84 | 0.891 | 0.6754 | 0.8844 |
| 4.1306 | 149.0 | 14900 | 4.9693 | 0.6777 | 0.8919 | 0.7318 | 0.0 | 0.5181 | 0.6902 | 0.7118 | 0.8336 | 0.8848 | 0.0 | 0.84 | 0.8915 | 0.6777 | 0.8848 |
| 4.1316 | 150.0 | 15000 | 5.0121 | 0.6732 | 0.8882 | 0.7185 | 0.0 | 0.5121 | 0.6864 | 0.7137 | 0.8332 | 0.8834 | 0.0 | 0.84 | 0.89 | 0.6732 | 0.8834 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
FlameF0X/SnowflakeCore-G0-Release-3-1B | FlameF0X | 2025-06-16T07:26:20Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-15T19:54:04Z | ---
license: apache-2.0
---
# 🔬 SnowflakeCore-G0-Release-3 Architecture Size Report
## Summary
This document provides a detailed breakdown of the parameter count and structural design of the **SnowflakeCore-G0-Release-3** model. SnowflakeCore-G0-Release-3 is a custom decoder-only transformer model built from scratch, designed for autoregressive language modeling with rotary positional embeddings (RoPE).
---
## 📐 Model Architecture Overview
| Component | Value |
|------------------------|-----------------|
| Architecture Type | Decoder-only Transformer |
| Hidden Size (d_model) | 1536 |
| Number of Layers | 32 |
| Attention Heads | 16 |
| Feedforward Dim (d_ff) | 6144 |
| Max Sequence Length | 2048 |
| Positional Encoding | Rotary (RoPE) |
| Vocabulary Size | 50,000 (assumed)|
| Total Parameters | **≈ 1.06 Billion** |
---
## 🧮 Parameter Count Breakdown
### 1. Embedding Layers
- **Token Embedding**: V × d = 50,000 × 1536 = 76.8M
- **Output Projection**: d × V = 1536 × 50,000 = 76.8M
**Total**:
```
P_embedding = 2 · 1536 · 50,000 = 153.6M
```
---
### 2. Transformer Blocks
Each of the 32 layers contains:
- **Multi-Head Attention (Q, K, V, Out)**:
4 · d² = 4 · 1536² = 9.44M
- **Feedforward Network (MLP)**:
2 · d · d_ff = 2 · 1536 · 6144 = 18.87M
- **Total per Layer**:
```
9.44M + 18.87M = 28.31M
```
- **Total across 32 layers**:
```
32 · 28.31M = 905.97M
```
---
### 3. Positional Embedding
- **Type**: Rotary Positional Embeddings (RoPE)
- **Parameter Count**: **0** (non-learned, sinusoidal basis)
---
## 📊 Final Parameter Estimate
```
Total Parameters ≈ P_embedding + P_transformer = 153.6M + 905.97M = 1,059.6M
```
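The same estimate can be reproduced with a few lines of Python. The figures are taken directly from the breakdown above; this is only the arithmetic, not the model code.
```python
d_model, n_layers, d_ff, vocab = 1536, 32, 6144, 50_000

embedding = 2 * vocab * d_model          # token embedding + output projection
attn_per_layer = 4 * d_model ** 2        # Q, K, V and output projections
ffn_per_layer = 2 * d_model * d_ff       # up- and down-projections
transformer = n_layers * (attn_per_layer + ffn_per_layer)

total = embedding + transformer
print(f"{total / 1e6:.1f}M parameters")  # ≈ 1059.6M
```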
---
## 🧠 Training Regime (Contextual)
| Item | Value |
|-----------------------------|-------------------|
| Training Dataset Size | ~2 million rows |
| Max Tokens per Sequence | 2048 |
| Effective Batch Size | 32 × 4 = 128 |
| Number of Epochs | 15 |
| Optimizer | AdamW |
| Learning Rate | 3 × 10⁻⁴ |
Approximate number of tokens:
```
2M × avg_tokens_per_row ≤ 4B tokens
```
---
## 🧾 Notes
- SnowflakeCore-G0-Release-3 exceeds the size of GPT-2 Large (~774M parameters).
- With RoPE and 32 layers, the model is well-positioned for long-range generalization.
- This parameter size is consistent with the compute-optimal design frontier for mid-scale language models.
---
## 📦 Conclusion
**SnowflakeCore-G0-Release-3** is a rigorously engineered, 1.06B parameter language model with modern architectural choices (RoPE, deep stack, wide FFN) that position it as a strong open foundation model for further research, deployment, and extension. |
DLxiaoying/distilbert-base-uncased-finetuned-clinc | DLxiaoying | 2025-06-16T07:25:58Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T06:30:52Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8063
- Accuracy: 0.9161
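A minimal inference sketch (not part of the auto-generated card): it assumes the checkpoint keeps the standard text-classification head saved by the Trainer; the example utterance assumes a CLINC-style intent task, which the model name suggests but the card does not state.
```python
from transformers import pipeline

# Hypothetical usage example for the fine-tuned classifier.
clf = pipeline(
    "text-classification",
    model="DLxiaoying/distilbert-base-uncased-finetuned-clinc",
)
print(clf("what is the balance on my savings account"))
```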
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.3296 | 1.0 | 318 | 3.3392 | 0.7313 |
| 2.6885 | 2.0 | 636 | 1.9295 | 0.8465 |
| 1.6035 | 3.0 | 954 | 1.2026 | 0.8965 |
| 1.0561 | 4.0 | 1272 | 0.8956 | 0.9113 |
| 0.8334 | 5.0 | 1590 | 0.8063 | 0.9161 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
NAMAA-Space/bojji | NAMAA-Space | 2025-06-16T07:25:58Z | 22 | 2 | model2vec | [
"model2vec",
"safetensors",
"embeddings",
"static-embeddings",
"sentence-transformers",
"ar",
"license:mit",
"region:us"
] | null | 2025-06-09T16:48:15Z | ---
library_name: model2vec
license: mit
model_name: Abdelkareem/bojji
tags:
- embeddings
- static-embeddings
- sentence-transformers
language:
- ar
---
# Bojji: Arabic Static Embedding Model

**Bojji** is a lightweight and fast Arabic embedding model built using the [Model2Vec](https://github.com/MinishLab/model2vec) distillation framework. It distills semantic knowledge from large transformer models into compact static embeddings, enabling ultra-fast inference with minimal resource usage.
---
## 🚀 Why Bojji?
- ⚡ **Extreme Speed**: Up to **500x faster** inference compared to Sentence Transformers.
- 📦 **Compact Size**: As much as **50x smaller**, making it ideal for edge and mobile deployments.
- 💻 **CPU & GPU Friendly**: Designed for fast execution even on CPU-only systems.
- 🧠 **Arabic-Tuned**: Specifically trained for Arabic language tasks with high-quality results.
- 🔧 **Versatile**: Suitable for tasks like semantic search, classification, clustering, deduplication, and more.
---
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/628f7a71dd993507cfcbe587/3JEnPfgF2BfbN5H81K0XD.png" alt="Speed vs Performance Chart" width="700"/>
</p>
---
## 🧪 What is Model2Vec?
[Model2Vec](https://github.com/MinishLab/model2vec) is a distillation framework that converts large sentence transformers into **static embedding models**. These static models retain the semantic quality of transformers while offering massive improvements in speed and memory efficiency.
---
## Installation
Install model2vec using pip:
```
pip install model2vec
```
## Usage
### Using Model2Vec
The [Model2Vec library](https://github.com/MinishLab/model2vec) is the fastest and most lightweight way to run Model2Vec models.
Load this model using the `from_pretrained` method:
```python
from model2vec import StaticModel
# Load a pretrained Model2Vec model
model = StaticModel.from_pretrained("NAMAA-Space/bojji")
# Compute text embeddings
embeddings = model.encode(["Example sentence"])
```
### Using Sentence Transformers
You can also use the [Sentence Transformers library](https://github.com/UKPLab/sentence-transformers) to load and use the model:
```python
from sentence_transformers import SentenceTransformer
# Load a pretrained Sentence Transformer model
model = SentenceTransformer("NAMAA-Space/bojji")
# Compute text embeddings
embeddings = model.encode(["Example sentence"])
```
## How it Works
Model2vec creates a small, fast, and powerful model that outperforms other static embedding models by a large margin on all tasks we could find, while being much faster to create than traditional static embedding models such as GloVe. Best of all, you don't need any data to distill a model using Model2Vec.
It works by passing a vocabulary through a sentence transformer model, then reducing the dimensionality of the resulting embeddings using PCA, and finally weighting the embeddings using [SIF weighting](https://openreview.net/pdf?id=SyK00v5xx). During inference, we simply take the mean of all token embeddings occurring in a sentence.
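As a rough illustration of that inference step (this is not the actual model2vec implementation; the tokenization, dimensionality, and embedding-table format are stand-ins), a sentence embedding is simply the mean of the static token vectors:
```python
import numpy as np

def embed_sentence(tokens, embedding_table, dim=256):
    """Mean-pool static token embeddings into one sentence vector."""
    vectors = [embedding_table[t] for t in tokens if t in embedding_table]
    if not vectors:
        return np.zeros(dim)          # no known tokens: fall back to zeros
    return np.mean(vectors, axis=0)
```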
## Benchmark on Arabic
## Speed
| Model | Speed (sentences/second) | Device |
|---------------------------------------|--------------------------|--------|
| zarra | 26893.63 | cpu |
| bojji | 27478.15 | cpu |
| potion-multilingual-128M | 27145.31 | cpu |
| paraphrase-multilingual-MiniLM-L12-v2 | 2363.24 | cuda |
| silma_ai_embedding_sts_v0.1 | 627.13 | cuda |
| muffakir_embedding | 621.77 | cuda |
| get_multilingual_base | 895.41 | cuda |
| arabic_retrieval_v1.0 | 618.56 | cuda |
| arabic_triplet_matryoshka_v2 | 610.64 | cuda |
- Zarra and Bojji excel in speed, achieving 26893.63 and 27478.15 sentences per second on CPU, respectively, far surpassing CUDA-based models like arabic_triplet_matryoshka_v2 (610.64).
- Top Performer: Bojji is the fastest model, slightly ahead of Zarra and potion-multilingual-128M (27145.31), highlighting the efficiency of Model2Vec-based models on CPU.
- Key Observation: The high speed of Zarra and Bojji on CPU makes them ideal for resource-constrained environments, offering significant advantages over CUDA-dependent models.
## Size of the Model
| Model | Parameters (M) | Size (MB) | Relative to Largest (%) | Less than Largest (x) |
|----------------------------------|----------------|-----------|-------------------------|-----------------------|
| zarra | 64.00 | 244.14 | 41.92 | 2.39 |
| bojji | 124.88 | 476.40 | 81.79 | 1.22 |
| potion-multilingual-128M | 128.09 | 488.63 | 83.89 | 1.19 |
| paraphrase-multilingual-MiniLM-… | 117.65 | 448.82 | 77.06 | 1.30 |
| silma_ai_embedding_sts_v0.1 | 135.19 | 515.72 | 88.54 | 1.13 |
| muffakir_embedding | 135.19 | 515.72 | 88.54 | 1.13 |
| arabic_retrieval_v1.0 | 135.19 | 515.73 | 88.54 | 1.13 |
| arabic_triplet_matryoshka_v2 | 135.19 | 515.72 | 88.54 | 1.13 |
| get_multilingual_base | 305.37 | 582.45 | 100.00 | 1.00 |
- Zarra is the smallest model, with only 64 million parameters and 244.14 MB in size, making it 2.39 times smaller than the largest model (get_multilingual_base).
- Bojji is slightly larger at 124.88 million parameters and 476.40 MB, but still significantly smaller than most other models.
- Top Performer: Zarra leads in compactness, offering the smallest footprint, which is critical for deployment on resource-limited devices.
- Key Observation: The compact size of Zarra and Bojji aligns with their design goal of efficiency, making them highly suitable for edge computing and real-time applications.
| Model | Avg | MIRAC | MLQAR | Massi | Multi | STS17 | STS22 | XNLI_ |
|---------------------------------------|-------|-------|-------|-------|-------|-------|-------|-------|
| arabic_triplet_matryoshka_v2 | 0.6610 | 0.6262 | 0.5093 | 0.5577 | 0.5868 | 0.8531 | 0.6396 | 0.8542 |
| muffakir_embedding | 0.6494 | 0.6424 | 0.5267 | 0.5462 | 0.5943 | 0.8485 | 0.6291 | 0.7583 |
| arabic_retrieval_v1.0 | 0.6473 | 0.6159 | 0.5674 | 0.5832 | 0.5993 | 0.8002 | 0.6254 | 0.7393 |
| gate_arabert-v1 | 0.6444 | 0.5774 | 0.4808 | 0.5345 | 0.5847 | 0.8278 | 0.6310 | 0.8746 |
| get_multilingual_base | 0.6440 | 0.7177 | 0.5698 | 0.5071 | 0.5521 | 0.7881 | 0.6145 | 0.7584 |
| arabic_sts_matryoshka | 0.6413 | 0.5828 | 0.4840 | 0.5457 | 0.5494 | 0.8290 | 0.6242 | 0.8740 |
| silma_ai_embedding_sts_v0.1 | 0.6138 | 0.3799 | 0.5011 | 0.5600 | 0.5749 | 0.8559 | 0.6122 | 0.8125 |
| Arabic-MiniLM-L12-v2-all-nli-triplet | 0.5431 | 0.2240 | 0.3612 | 0.4775 | 0.5698 | 0.8111 | 0.5540 | 0.8043 |
| paraphrase-multilingual-MiniLM-L12-v2 | 0.5208 | 0.2191 | 0.3496 | 0.4515 | 0.5573 | 0.7916 | 0.4908 | 0.7859 |
| bojji | 0.5177 | 0.2941 | 0.3989 | 0.4667 | 0.5433 | 0.7233 | 0.5880 | 0.6094 |
| zarra | 0.4822 | 0.2295 | 0.3473 | 0.4119 | 0.5237 | 0.6469 | 0.6218 | 0.5942 |
| potion-multilingual-128M | 0.4699 | 0.1658 | 0.3150 | 0.4285 | 0.5338 | 0.6511 | 0.5951 | 0.5999 |
| all_minilm_l6_v2 | 0.2843 | 0.0005 | 0.0064 | 0.1905 | 0.4934 | 0.5089 | 0.2518 | 0.5384 |
### Sorted by STS17_main (Score)
| Model Name | STS17_main |
|---------------------------------------|------------|
| silma_ai_embedding_sts_v0.1 | 0.856 |
| arabic_triplet_matryoshka_v2 | 0.853 |
| muffakir_embedding | 0.849 |
| arabic_sts_matryoshka | 0.829 |
| gate_arabert-v1 | 0.828 |
| Arabic-MiniLM-L12-v2-all-nli-triplet | 0.811 |
| arabic_retrieval_v1.0 | 0.800 |
| paraphrase-multilingual-MiniLM-L12-v2 | 0.792 |
| get_multilingual_base | 0.788 |
| bojji | 0.723 |
| potion-multilingual-128M | 0.651 |
| zarra | 0.647 |
| all_minilm_l6_v2 | 0.509 |
### Sorted by STS22.v2_main (Score)
| Model Name | STS22.v2_main |
|---------------------------------------|---------------|
| arabic_triplet_matryoshka_v2 | 0.640 |
| gate_arabert-v1 | 0.631 |
| muffakir_embedding | 0.629 |
| arabic_retrieval_v1.0 | 0.625 |
| arabic_sts_matryoshka | 0.624 |
| zarra | 0.622 |
| get_multilingual_base | 0.615 |
| silma_ai_embedding_sts_v0.1 | 0.612 |
| potion-multilingual-128M | 0.595 |
| bojji | 0.588 |
| Arabic-MiniLM-L12-v2-all-nli-triplet | 0.554 |
| paraphrase-multilingual-MiniLM-L12-v2 | 0.491 |
| all_minilm_l6_v2 | 0.252 |
## Additional Resources
- [Zarra & Bojji Blog](https://kareemai.com/blog/posts/minishlab/blog_zaraah.html)
- [NAMAA Collection](https://huggingface.co/collections/NAMAA-Space/zaraah-683f1f8a1eec1ec8f2badee5)
- [MinishLab](https://minishlab.github.io/)
- [Model2Vec Repo](https://github.com/MinishLab/model2vec) |
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.5_0.15_0.5_epoch2 | MinaMila | 2025-06-16T07:24:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T07:22:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Triangle104/Q3-8B-Kintsugi-Q5_K_M-GGUF | Triangle104 | 2025-06-16T07:22:14Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"axolotl",
"unsloth",
"roleplay",
"conversational",
"llama-cpp",
"gguf-my-repo",
"dataset:PygmalionAI/PIPPA",
"dataset:Alfitaria/nemotron-ultra-reasoning-synthkink",
"dataset:PocketDoc/Dans-Prosemaxx-Gutenberg",
"dataset:FreedomIntelligence/Medical-R1-Distill-Data",
"dataset:cognitivecomputations/SystemChat-2.0",
"dataset:allenai/tulu-3-sft-personas-instruction-following",
"dataset:kalomaze/Opus_Instruct_25k",
"dataset:simplescaling/s1K-claude-3-7-sonnet",
"dataset:ai2-adapt-dev/flan_v2_converted",
"dataset:grimulkan/theory-of-mind",
"dataset:grimulkan/physical-reasoning",
"dataset:nvidia/HelpSteer3",
"dataset:nbeerbower/gutenberg2-dpo",
"dataset:nbeerbower/gutenberg-moderne-dpo",
"dataset:nbeerbower/Purpura-DPO",
"dataset:antiven0m/physical-reasoning-dpo",
"dataset:allenai/tulu-3-IF-augmented-on-policy-70b",
"dataset:NobodyExistsOnTheInternet/system-message-DPO",
"base_model:allura-org/Q3-8B-Kintsugi",
"base_model:quantized:allura-org/Q3-8B-Kintsugi",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T07:18:55Z | ---
license: apache-2.0
base_model: allura-org/Q3-8B-Kintsugi
library_name: transformers
tags:
- mergekit
- axolotl
- unsloth
- roleplay
- conversational
- llama-cpp
- gguf-my-repo
datasets:
- PygmalionAI/PIPPA
- Alfitaria/nemotron-ultra-reasoning-synthkink
- PocketDoc/Dans-Prosemaxx-Gutenberg
- FreedomIntelligence/Medical-R1-Distill-Data
- cognitivecomputations/SystemChat-2.0
- allenai/tulu-3-sft-personas-instruction-following
- kalomaze/Opus_Instruct_25k
- simplescaling/s1K-claude-3-7-sonnet
- ai2-adapt-dev/flan_v2_converted
- grimulkan/theory-of-mind
- grimulkan/physical-reasoning
- nvidia/HelpSteer3
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
- nbeerbower/Purpura-DPO
- antiven0m/physical-reasoning-dpo
- allenai/tulu-3-IF-augmented-on-policy-70b
- NobodyExistsOnTheInternet/system-message-DPO
---
# Triangle104/Q3-8B-Kintsugi-Q5_K_M-GGUF
This model was converted to GGUF format from [`allura-org/Q3-8B-Kintsugi`](https://huggingface.co/allura-org/Q3-8B-Kintsugi) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/allura-org/Q3-8B-Kintsugi) for more details on the model.
---
Q3-8B-Kintsugi is a roleplaying model finetuned from Qwen3-8B-Base.
During testing, Kintsugi punched well above its weight class in terms of parameters, especially for 1-on-1 roleplaying and general storywriting.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Q3-8B-Kintsugi-Q5_K_M-GGUF --hf-file q3-8b-kintsugi-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Q3-8B-Kintsugi-Q5_K_M-GGUF --hf-file q3-8b-kintsugi-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Q3-8B-Kintsugi-Q5_K_M-GGUF --hf-file q3-8b-kintsugi-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Q3-8B-Kintsugi-Q5_K_M-GGUF --hf-file q3-8b-kintsugi-q5_k_m.gguf -c 2048
```
|
himedia/fincredit-gemma3-4b-lr5e05-bs2-r16-steps10-20250616_064351 | himedia | 2025-06-16T07:20:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"conversational",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T06:46:15Z | ---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** himedia
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Triangle104/Q3-8B-Kintsugi-Q5_K_S-GGUF | Triangle104 | 2025-06-16T07:17:04Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"axolotl",
"unsloth",
"roleplay",
"conversational",
"llama-cpp",
"gguf-my-repo",
"dataset:PygmalionAI/PIPPA",
"dataset:Alfitaria/nemotron-ultra-reasoning-synthkink",
"dataset:PocketDoc/Dans-Prosemaxx-Gutenberg",
"dataset:FreedomIntelligence/Medical-R1-Distill-Data",
"dataset:cognitivecomputations/SystemChat-2.0",
"dataset:allenai/tulu-3-sft-personas-instruction-following",
"dataset:kalomaze/Opus_Instruct_25k",
"dataset:simplescaling/s1K-claude-3-7-sonnet",
"dataset:ai2-adapt-dev/flan_v2_converted",
"dataset:grimulkan/theory-of-mind",
"dataset:grimulkan/physical-reasoning",
"dataset:nvidia/HelpSteer3",
"dataset:nbeerbower/gutenberg2-dpo",
"dataset:nbeerbower/gutenberg-moderne-dpo",
"dataset:nbeerbower/Purpura-DPO",
"dataset:antiven0m/physical-reasoning-dpo",
"dataset:allenai/tulu-3-IF-augmented-on-policy-70b",
"dataset:NobodyExistsOnTheInternet/system-message-DPO",
"base_model:allura-org/Q3-8B-Kintsugi",
"base_model:quantized:allura-org/Q3-8B-Kintsugi",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T07:15:07Z | ---
license: apache-2.0
base_model: allura-org/Q3-8B-Kintsugi
library_name: transformers
tags:
- mergekit
- axolotl
- unsloth
- roleplay
- conversational
- llama-cpp
- gguf-my-repo
datasets:
- PygmalionAI/PIPPA
- Alfitaria/nemotron-ultra-reasoning-synthkink
- PocketDoc/Dans-Prosemaxx-Gutenberg
- FreedomIntelligence/Medical-R1-Distill-Data
- cognitivecomputations/SystemChat-2.0
- allenai/tulu-3-sft-personas-instruction-following
- kalomaze/Opus_Instruct_25k
- simplescaling/s1K-claude-3-7-sonnet
- ai2-adapt-dev/flan_v2_converted
- grimulkan/theory-of-mind
- grimulkan/physical-reasoning
- nvidia/HelpSteer3
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
- nbeerbower/Purpura-DPO
- antiven0m/physical-reasoning-dpo
- allenai/tulu-3-IF-augmented-on-policy-70b
- NobodyExistsOnTheInternet/system-message-DPO
---
# Triangle104/Q3-8B-Kintsugi-Q5_K_S-GGUF
This model was converted to GGUF format from [`allura-org/Q3-8B-Kintsugi`](https://huggingface.co/allura-org/Q3-8B-Kintsugi) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/allura-org/Q3-8B-Kintsugi) for more details on the model.
---
Q3-8B-Kintsugi is a roleplaying model finetuned from Qwen3-8B-Base.
During testing, Kintsugi punched well above its weight class in terms of parameters, especially for 1-on-1 roleplaying and general storywriting.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Q3-8B-Kintsugi-Q5_K_S-GGUF --hf-file q3-8b-kintsugi-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Q3-8B-Kintsugi-Q5_K_S-GGUF --hf-file q3-8b-kintsugi-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Q3-8B-Kintsugi-Q5_K_S-GGUF --hf-file q3-8b-kintsugi-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Q3-8B-Kintsugi-Q5_K_S-GGUF --hf-file q3-8b-kintsugi-q5_k_s.gguf -c 2048
```
|
Sumail/Eurus9 | Sumail | 2025-06-16T07:15:20Z | 24 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-08-22T09:24:47Z | ---
base_model:
- itorgov/model-1723976476
- itorgov/model-1723975614
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [itorgov/model-1723976476](https://huggingface.co/itorgov/model-1723976476)
* [itorgov/model-1723975614](https://huggingface.co/itorgov/model-1723975614)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: itorgov/model-1723975614
layer_range: [0, 48]
- model: itorgov/model-1723976476
layer_range: [0, 48]
merge_method: slerp
base_model: itorgov/model-1723975614
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
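As a rough sketch (assuming the standard mergekit CLI installed via `pip install mergekit`; the file and output directory names below are illustrative), the configuration above can be applied with:
```bash
# Save the YAML above as slerp-config.yml, then run the merge.
# --cuda offloads the merge computation to a GPU if one is available.
mergekit-yaml slerp-config.yml ./Eurus9-merged --cuda
```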
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.05_0.5_epoch2 | MinaMila | 2025-06-16T07:14:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T07:12:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
katiecloudda/tuned-emoji | katiecloudda | 2025-06-16T07:11:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T07:10:29Z | ---
base_model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** katiecloudda
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-1b-it-unsloth-bnb-4bit
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AndyBlocker/ViStream | AndyBlocker | 2025-06-16T07:10:28Z | 0 | 0 | pytorch | [
"pytorch",
"computer-vision",
"object-tracking",
"spiking-neural-networks",
"visual-streaming-perception",
"energy-efficient",
"cvpr-2025",
"object-detection",
"license:cc-by-4.0",
"region:us"
] | object-detection | 2025-06-16T05:52:58Z | ---
license: cc-by-4.0
library_name: pytorch
tags:
- computer-vision
- object-tracking
- spiking-neural-networks
- visual-streaming-perception
- energy-efficient
- cvpr-2025
pipeline_tag: object-detection
---
# ViStream: Law-of-Charge-Conservation Inspired Spiking Neural Network for Visual Streaming Perception
**ViStream** is a novel energy-efficient framework for Visual Streaming Perception (VSP) that leverages Spiking Neural Networks (SNNs) with Law of Charge Conservation (LoCC) properties.
## Model Details
### Model Description
- **Developed by:** Kang You, Ziling Wei, Jing Yan, Boning Zhang, Qinghai Guo, Yaoyu Zhang, Zhezhi He
- **Model type:** Spiking Neural Network for Visual Streaming Perception
- **Language(s):** PyTorch implementation
- **License:** CC-BY-4.0
- **Paper:** [CVPR 2025](https://openaccess.thecvf.com/content/CVPR2025/papers/You_VISTREAM_Improving_Computation_Efficiency_of_Visual_Streaming_Perception_via_Law-of-Charge-Conservation_CVPR_2025_paper.pdf)
- **Repository:** [GitHub](https://github.com/Intelligent-Computing-Research-Group/ViStream)
### Model Architecture
ViStream introduces two key innovations:
1. **Law of Charge Conservation (LoCC)** property in ST-BIF neurons
2. **Differential Encoding (DiffEncode)** scheme for temporal optimization
The framework achieves significant computational reduction while maintaining accuracy equivalent to ANN counterparts.
## Uses
### Direct Use
ViStream can be directly used for:
- **Multiple Object Tracking (MOT)**
- **Single Object Tracking (SOT)**
- **Video Object Segmentation (VOS)**
- **Multiple Object Tracking and Segmentation (MOTS)**
- **Pose Tracking**
### Downstream Use
The model can be fine-tuned for various visual streaming perception tasks in:
- Autonomous driving
- UAV navigation
- AR/VR applications
- Real-time surveillance
## Bias, Risks, and Limitations
### Limitations
- Requires specific hardware optimization for maximum energy benefits
- Performance may vary with different frame rates
- Limited to visual perception tasks
### Recommendations
- Test thoroughly on target hardware before deployment
- Consider computational constraints of edge devices
- Validate performance on domain-specific datasets
## How to Get Started with the Model
```python
from huggingface_hub import hf_hub_download
import torch
# Download the checkpoint
checkpoint_path = hf_hub_download(
repo_id="AndyBlocker/ViStream",
filename="checkpoint-90.pth"
)
# Load the model (requires ViStream implementation)
checkpoint = torch.load(checkpoint_path, map_location='cpu')
```
For complete usage examples, see the [GitHub repository](https://github.com/Intelligent-Computing-Research-Group/ViStream).
## Training Details
### Training Data
The model was trained on multiple datasets for various visual streaming perception tasks including object tracking, video object segmentation, and pose tracking.
### Training Procedure
**Training Details:**
- Framework: PyTorch
- Optimization: Energy-efficient SNN training with Law of Charge Conservation
- Architecture: ResNet-based backbone with spike quantization layers
## Evaluation
The model demonstrates competitive performance across multiple visual streaming perception tasks while achieving significant energy efficiency improvements compared to traditional ANN-based approaches. Detailed evaluation results are available in the [CVPR 2025 paper](https://openaccess.thecvf.com/content/CVPR2025/papers/You_VISTREAM_Improving_Computation_Efficiency_of_Visual_Streaming_Perception_via_Law-of-Charge-Conservation_CVPR_2025_paper.pdf).
## Model Card Authors
Kang You, Ziling Wei, Jing Yan, Boning Zhang, Qinghai Guo, Yaoyu Zhang, Zhezhi He
## Model Card Contact
For questions about this model, please open an issue in the [GitHub repository](https://github.com/Intelligent-Computing-Research-Group/ViStream).
## Citation
```bibtex
@inproceedings{you2025vistream,
title={VISTREAM: Improving Computation Efficiency of Visual Streaming Perception via Law-of-Charge-Conservation Inspired Spiking Neural Network},
author={You, Kang and Wei, Ziling and Yan, Jing and Zhang, Boning and Guo, Qinghai and Zhang, Yaoyu and He, Zhezhi},
booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference},
pages={8796--8805},
year={2025}
}
``` |
AlicanKiraz0/SenecaLLM_x_gemma27b-v2 | AlicanKiraz0 | 2025-06-16T07:08:19Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"gemma3",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-06-16T07:03:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
NICFRU/nc_ner_bert_model_german_alle_ner_tags | NICFRU | 2025-06-16T07:08:17Z | 0 | 0 | null | [
"safetensors",
"bert",
"region:us"
] | null | 2025-06-16T07:05:43Z | # nc_ner_bert_model
This model is a fine-tuned version of bert-base-german-cased on the german-ler dataset.
It achieves the following results on the evaluation set:
- Loss: 0.011337515898048878
- F1: 0.9723312768741821
- Precision: 0.9669083472764455
- Recall: 0.9778153788306072
## Model description
More information needed
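A minimal usage sketch (not part of the original card; it assumes the standard 🤗 Transformers token-classification API and uses this repository's id):
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_id = "NICFRU/nc_ner_bert_model_german_alle_ner_tags"  # this repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# Aggregate sub-word predictions into whole-entity spans
ner = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",
)

text = "Das Bundesverfassungsgericht in Karlsruhe hat die Beschwerde zurückgewiesen."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```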
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- num_epochs: 3
### Training results
| loss | epoch | step | eval_loss | eval_f1 | eval_precision | eval_recall |
| --- | --- | --- | --- | --- | --- | --- |
| 0.7592 | 0.029958058717795086 | 50 | nan | nan | nan | nan |
| 0.1908 | 0.05991611743559017 | 100 | nan | nan | nan | nan |
| 0.1288 | 0.08987417615338526 | 150 | nan | nan | nan | nan |
| 0.0877 | 0.11983223487118035 | 200 | nan | nan | nan | nan |
| nan | 0.11983223487118035 | 200 | 0.07891597598791122 | 0.71008601947254 | 0.6772448611611973 | 0.7462745877210412 |
| 0.0782 | 0.14979029358897544 | 250 | nan | nan | nan | nan |
| 0.071 | 0.17974835230677053 | 300 | nan | nan | nan | nan |
| 0.0605 | 0.2097064110245656 | 350 | nan | nan | nan | nan |
| 0.0491 | 0.2396644697423607 | 400 | nan | nan | nan | nan |
| nan | 0.2396644697423607 | 400 | 0.05076289176940918 | 0.799463190184049 | 0.7723652528245971 | 0.828531690840453 |
| 0.0471 | 0.2696225284601558 | 450 | nan | nan | nan | nan |
| 0.0463 | 0.2995805871779509 | 500 | nan | nan | nan | nan |
| 0.0544 | 0.32953864589574594 | 550 | nan | nan | nan | nan |
| 0.0355 | 0.35949670461354105 | 600 | nan | nan | nan | nan |
| nan | 0.35949670461354105 | 600 | 0.03874693065881729 | 0.8478779208392943 | 0.8152971386647102 | 0.8831710709318498 |
| 0.0479 | 0.3894547633313361 | 650 | nan | nan | nan | nan |
| 0.0343 | 0.4194128220491312 | 700 | nan | nan | nan | nan |
| 0.033 | 0.44937088076692633 | 750 | nan | nan | nan | nan |
| 0.0367 | 0.4793289394847214 | 800 | nan | nan | nan | nan |
| nan | 0.4793289394847214 | 800 | 0.03735911101102829 | 0.8572247706422018 | 0.8258147670778863 | 0.891118617126962 |
| 0.0343 | 0.5092869982025164 | 850 | nan | nan | nan | nan |
| 0.0361 | 0.5392450569203115 | 900 | nan | nan | nan | nan |
| 0.0408 | 0.5692031156381067 | 950 | nan | nan | nan | nan |
| 0.0315 | 0.5991611743559018 | 1000 | nan | nan | nan | nan |
| nan | 0.5991611743559018 | 1000 | 0.030343208461999893 | 0.8759884281581485 | 0.8510399100618324 | 0.902443870454997 |
| 0.036 | 0.6291192330736968 | 1050 | nan | nan | nan | nan |
| 0.0225 | 0.6590772917914919 | 1100 | nan | nan | nan | nan |
| 0.0275 | 0.689035350509287 | 1150 | nan | nan | nan | nan |
| 0.0296 | 0.7189934092270821 | 1200 | nan | nan | nan | nan |
| nan | 0.7189934092270821 | 1200 | 0.03183047100901604 | 0.8875096974398758 | 0.866830839174086 | 0.9091992847208424 |
| 0.04 | 0.7489514679448772 | 1250 | nan | nan | nan | nan |
| 0.0281 | 0.7789095266626722 | 1300 | nan | nan | nan | nan |
| 0.0433 | 0.8088675853804673 | 1350 | nan | nan | nan | nan |
| 0.0299 | 0.8388256440982624 | 1400 | nan | nan | nan | nan |
| nan | 0.8388256440982624 | 1400 | 0.027965761721134186 | 0.8886951467596629 | 0.8671077504725898 | 0.9113848599244984 |
| 0.0381 | 0.8687837028160575 | 1450 | nan | nan | nan | nan |
| 0.0298 | 0.8987417615338527 | 1500 | nan | nan | nan | nan |
| 0.0278 | 0.9286998202516477 | 1550 | nan | nan | nan | nan |
| 0.0261 | 0.9586578789694428 | 1600 | nan | nan | nan | nan |
| nan | 0.9586578789694428 | 1600 | 0.028123166412115097 | 0.9013783731314309 | 0.8811918770165117 | 0.9225114245976554 |
| 0.0262 | 0.9886159376872379 | 1650 | nan | nan | nan | nan |
| 0.0178 | 1.0185739964050329 | 1700 | nan | nan | nan | nan |
| 0.0132 | 1.048532055122828 | 1750 | nan | nan | nan | nan |
| 0.0157 | 1.078490113840623 | 1800 | nan | nan | nan | nan |
| nan | 1.078490113840623 | 1800 | 0.028561240062117577 | 0.9077876791941109 | 0.8856548856548857 | 0.9310550367574012 |
| 0.0132 | 1.1084481725584183 | 1850 | nan | nan | nan | nan |
| 0.0155 | 1.1384062312762133 | 1900 | nan | nan | nan | nan |
| 0.0129 | 1.1683642899940083 | 1950 | nan | nan | nan | nan |
| 0.0148 | 1.1983223487118035 | 2000 | nan | nan | nan | nan |
| nan | 1.1983223487118035 | 2000 | 0.026516983285546303 | 0.9101913315111284 | 0.8946459412780656 | 0.9262865090403338 |
| 0.0106 | 1.2282804074295985 | 2050 | nan | nan | nan | nan |
| 0.0108 | 1.2582384661473935 | 2100 | nan | nan | nan | nan |
| 0.0165 | 1.2881965248651888 | 2150 | nan | nan | nan | nan |
| 0.0167 | 1.3181545835829838 | 2200 | nan | nan | nan | nan |
| nan | 1.3181545835829838 | 2200 | 0.02606791816651821 | 0.9090380703283929 | 0.8869565217391304 | 0.932247168686668 |
| 0.0151 | 1.348112642300779 | 2250 | nan | nan | nan | nan |
| 0.0169 | 1.378070701018574 | 2300 | nan | nan | nan | nan |
| 0.0165 | 1.4080287597363692 | 2350 | nan | nan | nan | nan |
| 0.0127 | 1.4379868184541642 | 2400 | nan | nan | nan | nan |
| nan | 1.4379868184541642 | 2400 | 0.02857920527458191 | 0.9142130490071408 | 0.9003853564547206 | 0.9284720842439896 |
| 0.0131 | 1.4679448771719592 | 2450 | nan | nan | nan | nan |
| 0.0147 | 1.4979029358897544 | 2500 | nan | nan | nan | nan |
| 0.0131 | 1.5278609946075494 | 2550 | nan | nan | nan | nan |
| 0.0116 | 1.5578190533253444 | 2600 | nan | nan | nan | nan |
| nan | 1.5578190533253444 | 2600 | 0.0249184537678957 | 0.9242824958370065 | 0.9115146831530139 | 0.9374130737134909 |
| 0.0166 | 1.5877771120431396 | 2650 | nan | nan | nan | nan |
| 0.0145 | 1.6177351707609346 | 2700 | nan | nan | nan | nan |
| 0.0152 | 1.6476932294787296 | 2750 | nan | nan | nan | nan |
| 0.0119 | 1.6776512881965249 | 2800 | nan | nan | nan | nan |
| nan | 1.6776512881965249 | 2800 | 0.024047361686825752 | 0.9198402649264634 | 0.9021780664883454 | 0.9382078283330022 |
| 0.0157 | 1.70760934691432 | 2850 | nan | nan | nan | nan |
| 0.0139 | 1.737567405632115 | 2900 | nan | nan | nan | nan |
| 0.0161 | 1.76752546434991 | 2950 | nan | nan | nan | nan |
| 0.0118 | 1.7974835230677053 | 3000 | nan | nan | nan | nan |
| nan | 1.7974835230677053 | 3000 | 0.02312026545405388 | 0.9283470749901845 | 0.9173617846750728 | 0.9395986489171468 |
| 0.0097 | 1.8274415817855003 | 3050 | nan | nan | nan | nan |
| 0.016 | 1.8573996405032953 | 3100 | nan | nan | nan | nan |
| 0.013 | 1.8873576992210905 | 3150 | nan | nan | nan | nan |
| 0.0133 | 1.9173157579388855 | 3200 | nan | nan | nan | nan |
| nan | 1.9173157579388855 | 3200 | 0.023281875997781754 | 0.9217849819353578 | 0.9062980030721967 | 0.9378104510232466 |
| 0.0175 | 1.9472738166566805 | 3250 | nan | nan | nan | nan |
| 0.0154 | 1.9772318753744758 | 3300 | nan | nan | nan | nan |
| 0.0096 | 2.007189934092271 | 3350 | nan | nan | nan | nan |
| 0.0057 | 2.0371479928100658 | 3400 | nan | nan | nan | nan |
| nan | 2.0371479928100658 | 3400 | 0.023734014481306076 | 0.9255650818394388 | 0.9080481743452494 | 0.9437711106695807 |
| 0.0083 | 2.067106051527861 | 3450 | nan | nan | nan | nan |
| 0.0081 | 2.097064110245656 | 3500 | nan | nan | nan | nan |
| 0.0075 | 2.127022168963451 | 3550 | nan | nan | nan | nan |
| 0.0057 | 2.156980227681246 | 3600 | nan | nan | nan | nan |
| nan | 2.156980227681246 | 3600 | 0.025288647040724754 | 0.9282016215688189 | 0.9129515757109915 | 0.9439697993244586 |
| 0.0049 | 2.1869382863990414 | 3650 | nan | nan | nan | nan |
| 0.0076 | 2.2168963451168366 | 3700 | nan | nan | nan | nan |
| 0.0067 | 2.2468544038346314 | 3750 | nan | nan | nan | nan |
| 0.009 | 2.2768124625524266 | 3800 | nan | nan | nan | nan |
| nan | 2.2768124625524266 | 3800 | 0.025000886991620064 | 0.926643935703848 | 0.9090214067278287 | 0.9449632425988476 |
| 0.0067 | 2.306770521270222 | 3850 | nan | nan | nan | nan |
| 0.0081 | 2.3367285799880166 | 3900 | nan | nan | nan | nan |
| 0.0073 | 2.366686638705812 | 3950 | nan | nan | nan | nan |
| 0.0059 | 2.396644697423607 | 4000 | nan | nan | nan | nan |
| nan | 2.396644697423607 | 4000 | 0.024387583136558533 | 0.9357906087638466 | 0.9235681114551083 | 0.9483409497317703 |
| 0.0073 | 2.426602756141402 | 4050 | nan | nan | nan | nan |
| 0.0075 | 2.456560814859197 | 4100 | nan | nan | nan | nan |
| 0.0052 | 2.4865188735769923 | 4150 | nan | nan | nan | nan |
| 0.006 | 2.516476932294787 | 4200 | nan | nan | nan | nan |
| nan | 2.516476932294787 | 4200 | 0.024684011936187744 | 0.9356576241308392 | 0.9225569718037853 | 0.9491357043512816 |
| 0.0088 | 2.5464349910125823 | 4250 | nan | nan | nan | nan |
| 0.0122 | 2.5763930497303775 | 4300 | nan | nan | nan | nan |
| 0.008 | 2.6063511084481723 | 4350 | nan | nan | nan | nan |
| 0.0072 | 2.6363091671659675 | 4400 | nan | nan | nan | nan |
| nan | 2.6363091671659675 | 4400 | 0.02404804341495037 | 0.9307684796406601 | 0.9151305683563749 | 0.9469501291476257 |
| 0.0075 | 2.6662672258837627 | 4450 | nan | nan | nan | nan |
| 0.0072 | 2.696225284601558 | 4500 | nan | nan | nan | nan |
| 0.0061 | 2.7261833433193527 | 4550 | nan | nan | nan | nan |
| 0.0057 | 2.756141402037148 | 4600 | nan | nan | nan | nan |
| nan | 2.756141402037148 | 4600 | 0.026269957423210144 | 0.9321470473210794 | 0.9176130895091434 | 0.9471488178025035 |
| 0.0038 | 2.786099460754943 | 4650 | nan | nan | nan | nan |
| 0.008 | 2.8160575194727384 | 4700 | nan | nan | nan | nan |
| 0.0073 | 2.846015578190533 | 4750 | nan | nan | nan | nan |
| 0.0065 | 2.8759736369083284 | 4800 | nan | nan | nan | nan |
| nan | 2.8759736369083284 | 4800 | 0.02427930384874344 | 0.9350088356567839 | 0.9241218707549 | 0.9461553745281145 |
| 0.0056 | 2.9059316956261236 | 4850 | nan | nan | nan | nan |
| 0.0074 | 2.9358897543439184 | 4900 | nan | nan | nan | nan |
| 0.0059 | 2.9658478130617136 | 4950 | nan | nan | nan | nan |
| 0.006 | 2.995805871779509 | 5000 | nan | nan | nan | nan |
| nan | 2.995805871779509 | 5000 | 0.025616737082600594 | 0.9314559499364427 | 0.9170196380438969 | 0.9463540631829922 |
| 0.004 | 3.0257639304973036 | 5050 | nan | nan | nan | nan |
| 0.0047 | 3.055721989215099 | 5100 | nan | nan | nan | nan |
| 0.0026 | 3.085680047932894 | 5150 | nan | nan | nan | nan |
| 0.0047 | 3.115638106650689 | 5200 | nan | nan | nan | nan |
| nan | 3.115638106650689 | 5200 | 0.02595394104719162 | 0.935866053069617 | 0.9225868725868726 | 0.9495330816610371 |
| 0.0036 | 3.145596165368484 | 5250 | nan | nan | nan | nan |
| 0.0035 | 3.1755542240862793 | 5300 | nan | nan | nan | nan |
| 0.0034 | 3.205512282804074 | 5350 | nan | nan | nan | nan |
| 0.0025 | 3.2354703415218693 | 5400 | nan | nan | nan | nan |
| nan | 3.2354703415218693 | 5400 | 0.02661316469311714 | 0.9393134651322907 | 0.9300740163615115 | 0.9487383270415259 |
| 0.0041 | 3.2654284002396645 | 5450 | nan | nan | nan | nan |
| 0.0033 | 3.2953864589574597 | 5500 | nan | nan | nan | nan |
| 0.0042 | 3.3253445176752545 | 5550 | nan | nan | nan | nan |
| 0.0033 | 3.3553025763930497 | 5600 | nan | nan | nan | nan |
| nan | 3.3553025763930497 | 5600 | 0.02527858316898346 | 0.9387915764613265 | 0.9300058490933906 | 0.947744883767137 |
| 0.0016 | 3.385260635110845 | 5650 | nan | nan | nan | nan |
| 0.0027 | 3.4152186938286397 | 5700 | nan | nan | nan | nan |
| 0.0032 | 3.445176752546435 | 5750 | nan | nan | nan | nan |
| 0.0054 | 3.47513481126423 | 5800 | nan | nan | nan | nan |
| nan | 3.47513481126423 | 5800 | 0.026085887104272842 | 0.9393850083505256 | 0.9290711232024874 | 0.9499304589707928 |
| 0.0041 | 3.5050928699820254 | 5850 | nan | nan | nan | nan |
| 0.0031 | 3.53505092869982 | 5900 | nan | nan | nan | nan |
| 0.0043 | 3.5650089874176154 | 5950 | nan | nan | nan | nan |
| 0.0036 | 3.5949670461354106 | 6000 | nan | nan | nan | nan |
| nan | 3.5949670461354106 | 6000 | 0.026285560801625252 | 0.9379647749510763 | 0.9240408714092925 | 0.9523147228293265 |
| 0.003 | 3.6249251048532054 | 6050 | nan | nan | nan | nan |
| 0.0036 | 3.6548831635710006 | 6100 | nan | nan | nan | nan |
| 0.003 | 3.684841222288796 | 6150 | nan | nan | nan | nan |
| 0.0018 | 3.7147992810065906 | 6200 | nan | nan | nan | nan |
| nan | 3.7147992810065906 | 6200 | 0.026966776698827744 | 0.9423455332546242 | 0.9333463262521926 | 0.9515199682098152 |
| 0.002 | 3.744757339724386 | 6250 | nan | nan | nan | nan |
| 0.0031 | 3.774715398442181 | 6300 | nan | nan | nan | nan |
| 0.0026 | 3.804673457159976 | 6350 | nan | nan | nan | nan |
| 0.0032 | 3.834631515877771 | 6400 | nan | nan | nan | nan |
| nan | 3.834631515877771 | 6400 | 0.02665964514017105 | 0.9404937543031378 | 0.9312426957537983 | 0.9499304589707928 |
| 0.0046 | 3.8645895745955663 | 6450 | nan | nan | nan | nan |
| 0.0037 | 3.894547633313361 | 6500 | nan | nan | nan | nan |
| 0.006 | 3.9245056920311563 | 6550 | nan | nan | nan | nan |
| 0.0041 | 3.9544637507489515 | 6600 | nan | nan | nan | nan |
| nan | 3.9544637507489515 | 6600 | 0.025271492078900337 | 0.9411996066863323 | 0.9316721822075141 | 0.9509239022451818 |
| 0.0028 | 3.9844218094667463 | 6650 | nan | nan | nan | nan |
| 0.003 | 4.014379868184542 | 6700 | nan | nan | nan | nan |
| 0.0027 | 4.044337926902337 | 6750 | nan | nan | nan | nan |
| 0.0017 | 4.0742959856201315 | 6800 | nan | nan | nan | nan |
| nan | 4.0742959856201315 | 6800 | 0.026743704453110695 | 0.9429133858267716 | 0.934269553345036 | 0.951718656864693 |
| 0.0029 | 4.104254044337927 | 6850 | nan | nan | nan | nan |
| 0.0025 | 4.134212103055722 | 6900 | nan | nan | nan | nan |
| 0.0017 | 4.164170161773517 | 6950 | nan | nan | nan | nan |
| 0.0015 | 4.194128220491312 | 7000 | nan | nan | nan | nan |
| nan | 4.194128220491312 | 7000 | 0.026866145431995392 | 0.9408062930186825 | 0.9312828499124002 | 0.9505265249354262 |
| 0.0014 | 4.224086279209107 | 7050 | nan | nan | nan | nan |
| 0.0023 | 4.254044337926902 | 7100 | nan | nan | nan | nan |
| 0.0034 | 4.284002396644698 | 7150 | nan | nan | nan | nan |
| 0.0027 | 4.313960455362492 | 7200 | nan | nan | nan | nan |
| nan | 4.313960455362492 | 7200 | 0.026673471555113792 | 0.9432142505658893 | 0.9344773790951638 | 0.9521160341744487 |
| 0.001 | 4.343918514080288 | 7250 | nan | nan | nan | nan |
| 0.0016 | 4.373876572798083 | 7300 | nan | nan | nan | nan |
| 0.0061 | 4.403834631515878 | 7350 | nan | nan | nan | nan |
| 0.0015 | 4.433792690233673 | 7400 | nan | nan | nan | nan |
| nan | 4.433792690233673 | 7400 | 0.026809940114617348 | 0.9424722194906088 | 0.9330218068535826 | 0.9521160341744487 |
| 0.0022 | 4.463750748951468 | 7450 | nan | nan | nan | nan |
| 0.001 | 4.493708807669263 | 7500 | nan | nan | nan | nan |
| 0.0015 | 4.5236668663870585 | 7550 | nan | nan | nan | nan |
| 0.0019 | 4.553624925104853 | 7600 | nan | nan | nan | nan |
| nan | 4.553624925104853 | 7600 | 0.02733566425740719 | 0.9416846652267818 | 0.9307199689501261 | 0.9529107887939599 |
| 0.0012 | 4.583582983822648 | 7650 | nan | nan | nan | nan |
| 0.0025 | 4.613541042540444 | 7700 | nan | nan | nan | nan |
| 0.001 | 4.6434991012582385 | 7750 | nan | nan | nan | nan |
| 0.0015 | 4.673457159976033 | 7800 | nan | nan | nan | nan |
| nan | 4.673457159976033 | 7800 | 0.027779242023825645 | 0.9427390791027155 | 0.9337361138179692 | 0.9519173455195709 |
| 0.0021 | 4.703415218693829 | 7850 | nan | nan | nan | nan |
| 0.0031 | 4.733373277411624 | 7900 | nan | nan | nan | nan |
| 0.0017 | 4.7633313361294185 | 7950 | nan | nan | nan | nan |
| 0.0022 | 4.793289394847214 | 8000 | nan | nan | nan | nan |
| nan | 4.793289394847214 | 8000 | 0.02728326804935932 | 0.9410609037328094 | 0.9306392073052263 | 0.951718656864693 |
| 0.0023 | 4.823247453565009 | 8050 | nan | nan | nan | nan |
| 0.0019 | 4.853205512282804 | 8100 | nan | nan | nan | nan |
| 0.0011 | 4.883163571000599 | 8150 | nan | nan | nan | nan |
| 0.0028 | 4.913121629718394 | 8200 | nan | nan | nan | nan |
| nan | 4.913121629718394 | 8200 | 0.02626235969364643 | 0.9436411920920625 | 0.9343591741332294 | 0.9531094774488377 |
| 0.002 | 4.943079688436189 | 8250 | nan | nan | nan | nan |
| 0.0013 | 4.973037747153985 | 8300 | nan | nan | nan | nan |
| 0.0023 | 5.002995805871779 | 8350 | nan | nan | nan | nan |
| 0.001 | 5.032953864589574 | 8400 | nan | nan | nan | nan |
| nan | 5.032953864589574 | 8400 | 0.02664945460855961 | 0.9427953607234125 | 0.9328924333787201 | 0.9529107887939599 |
| 0.0008 | 5.06291192330737 | 8450 | nan | nan | nan | nan |
| 0.001 | 5.092869982025165 | 8500 | nan | nan | nan | nan |
| 0.0019 | 5.12282804074296 | 8550 | nan | nan | nan | nan |
| 0.0016 | 5.152786099460755 | 8600 | nan | nan | nan | nan |
| nan | 5.152786099460755 | 8600 | 0.02724417671561241 | 0.942354905234214 | 0.9316504854368932 | 0.9533081661037155 |
| 0.001 | 5.18274415817855 | 8650 | nan | nan | nan | nan |
| 0.001 | 5.2127022168963455 | 8700 | nan | nan | nan | nan |
| 0.0008 | 5.24266027561414 | 8750 | nan | nan | nan | nan |
| 0.0009 | 5.272618334331935 | 8800 | nan | nan | nan | nan |
| nan | 5.272618334331935 | 8800 | 0.027002455666661263 | 0.9444772593030125 | 0.936 | 0.9531094774488377 |
| 0.0015 | 5.302576393049731 | 8850 | nan | nan | nan | nan |
| 0.0027 | 5.3325344517675255 | 8900 | nan | nan | nan | nan |
| 0.0015 | 5.36249251048532 | 8950 | nan | nan | nan | nan |
| 0.0018 | 5.392450569203116 | 9000 | nan | nan | nan | nan |
| nan | 5.392450569203116 | 9000 | 0.026851218193769455 | 0.9485511531638084 | 0.941130451789556 | 0.9560898072720048 |
| 0.0011 | 5.422408627920911 | 9050 | nan | nan | nan | nan |
| 0.001 | 5.4523666866387055 | 9100 | nan | nan | nan | nan |
| 0.0009 | 5.482324745356501 | 9150 | nan | nan | nan | nan |
| 0.0022 | 5.512282804074296 | 9200 | nan | nan | nan | nan |
| nan | 5.512282804074296 | 9200 | 0.026996750384569168 | 0.9460842188114915 | 0.9370493081270708 | 0.9552950526524936 |
| 0.0018 | 5.542240862792091 | 9250 | nan | nan | nan | nan |
| 0.0008 | 5.572198921509886 | 9300 | nan | nan | nan | nan |
| 0.0017 | 5.602156980227681 | 9350 | nan | nan | nan | nan |
| 0.0009 | 5.632115038945477 | 9400 | nan | nan | nan | nan |
| nan | 5.632115038945477 | 9400 | 0.02744028903543949 | 0.9479802955665025 | 0.9401993355481728 | 0.955891118617127 |
| 0.0012 | 5.662073097663272 | 9450 | nan | nan | nan | nan |
| 0.0016 | 5.692031156381066 | 9500 | nan | nan | nan | nan |
| 0.0007 | 5.721989215098862 | 9550 | nan | nan | nan | nan |
| 0.0006 | 5.751947273816657 | 9600 | nan | nan | nan | nan |
| nan | 5.751947273816657 | 9600 | 0.027505146339535713 | 0.9460523725142744 | 0.937560975609756 | 0.9546989866878601 |
| 0.0007 | 5.781905332534452 | 9650 | nan | nan | nan | nan |
| 0.0007 | 5.811863391252247 | 9700 | nan | nan | nan | nan |
| 0.0007 | 5.841821449970042 | 9750 | nan | nan | nan | nan |
| 0.001 | 5.871779508687837 | 9800 | nan | nan | nan | nan |
| nan | 5.871779508687837 | 9800 | 0.02771810069680214 | 0.9461561177281229 | 0.9375731564572767 | 0.954897675342738 |
| 0.0011 | 5.9017375674056325 | 9850 | nan | nan | nan | nan |
| 0.001 | 5.931695626123427 | 9900 | nan | nan | nan | nan |
| 0.0008 | 5.961653684841222 | 9950 | nan | nan | nan | nan |
| 0.002 | 5.991611743559018 | 10000 | nan | nan | nan | nan |
| nan | 5.991611743559018 | 10000 | 0.027671782299876213 | 0.946808510638298 | 0.9388552451650714 | 0.954897675342738 |
| nan | 6.0 | 10014 | nan | nan | nan | nan |
## Framework versions
- Transformers: 2.3.0
- Pytorch: (see environment)
- Datasets: (see environment)
- Tokenizers: (see environment)
|
LarryAIDraw/Skirk_v2.0_pony-000034 | LarryAIDraw | 2025-06-16T07:05:53Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-16T06:23:24Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/598575/genshin-impactskirkpony |
imrahulwarkade/mistral-toneop-finetuned | imrahulwarkade | 2025-06-16T07:05:44Z | 0 | 1 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] | null | 2025-06-16T07:04:28Z | ---
base_model: mistralai/Mistral-7B-Instruct-v0.2
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
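As a placeholder until official instructions are added, here is a minimal sketch that assumes this repository contains a standard PEFT (LoRA) adapter for the base model listed above; the prompt and generation settings are illustrative only:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.2"           # base model from the card metadata
adapter_id = "imrahulwarkade/mistral-toneop-finetuned"   # this repository

# Tokenizer comes from the base model; the adapter only stores LoRA weights
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)

# Attach the fine-tuned adapter weights to the base model
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "[INST] Suggest a balanced breakfast. [/INST]"  # illustrative prompt only
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```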
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
namdp-ptit/LLamaRE-8B-Instruct-ZeroShot | namdp-ptit | 2025-06-16T07:05:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"transformer",
"classification",
"token-classification",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-06-16T06:09:47Z | ---
license: apache-2.0
language:
- en
base_model:
- unsloth/Meta-Llama-3.1-8B-Instruct
pipeline_tag: token-classification
library_name: transformers
tags:
- transformer
- classification
---
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained('namdp-ptit/LLamaRE-8B-Instruct-ZeroShot')
model = AutoModelForCausalLM.from_pretrained(
'namdp-ptit/LLamaRE-8B-Instruct-ZeroShot',
torch_dtype="auto",
device_map="cuda",
)
if tokenizer.pad_token is None:
tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = model.config.eos_token_id
user_prompt = """
Extract relationships between entities in text **strictly using ONLY the provided Relationship List** below and **MUST** strictly adhere to the output format.
Format each relationship as '<relation_type>: <head_entity>, <tail_entity>' and separated multiple relationship by '|'. Return 'None' if no relationships are identified.
Relationship List: {re_labels}
Text: {text}
"""
query = 'An art exhibit at the Hakawati Theatre in Arab east Jerusalem was a series of portraits of Palestinians killed in the rebellion.'
re_labels = ["Organization based in", "Located in", "Live in", "Work for", "Kill"]
user_prompt = user_prompt.format(re_labels=re_labels, text=query)  # fill the relation list and input text into the template
messages = [
{
"role": "system",
"content": "You are an expert in Relation Extraction (RE) task."
},
{
"role": "user",
"content": user_prompt
}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer(text, return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512,
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response) # Organization based in: Hakawati Theatre, Jerusalem
```
## Contact
**Email**: [email protected]
**LinkedIn**: [Dang Phuong Nam](https://www.linkedin.com/in/dang-phuong-nam-157912288/)
**Facebook**: [Phương Nam](https://www.facebook.com/phuong.namdang.7146557)
## Support The Project
If you find this project helpful and wish to support its ongoing development, here are some ways you can contribute:
1. **Star the Repository**: Show your appreciation by starring the repository. Your support motivates further
development
and enhancements.
2. **Contribute**: We welcome your contributions! You can help by reporting bugs, submitting pull requests, or
suggesting new features.
3. **Donate**: If you’d like to support financially, consider making a donation. You can donate through:
- Vietcombank: 9912692172 - DANG PHUONG NAM
Thank you for your support!
## Citation
Please cite as
```Plaintext
@misc{LlamaRE-8B-Instruct-ZeroShot,
title={LlamaRE: A Large Language Model for Relation Extraction},
author={Nam Dang Phuong},
year={2025},
publisher={Huggingface},
}
``` |
LarryAIDraw/skirk_genshinPDXL_scarxzys | LarryAIDraw | 2025-06-16T07:04:43Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-16T06:23:01Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/1062378/pony-skirk-or-genshin-impact |
Wilbur1240/ppo-SnowballTarget | Wilbur1240 | 2025-06-16T07:03:43Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2025-06-16T07:03:34Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
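To start from the checkpoint stored in this repository, you can first pull it locally with the Hub integration's download helper (a sketch only; the target directory is just an example):
```bash
mlagents-load-from-hf --repo-id="Wilbur1240/ppo-SnowballTarget" --local-dir="./downloads/SnowballTarget"
```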
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Wilbur1240/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
ChakuChidiya/itr_model_1 | ChakuChidiya | 2025-06-16T07:01:06Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased-distilled-squad",
"base_model:finetune:distilbert/distilbert-base-uncased-distilled-squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2025-06-16T07:00:10Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased-distilled-squad
tags:
- generated_from_trainer
model-index:
- name: itr_model_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# itr_model_1
This model is a fine-tuned version of [distilbert/distilbert-base-uncased-distilled-squad](https://huggingface.co/distilbert/distilbert-base-uncased-distilled-squad) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0969
## Model description
More information needed
## Intended uses & limitations
More information needed
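Until the authors add details, the checkpoint can be exercised as a standard extractive question-answering model (a minimal sketch; the question/context pair below is purely illustrative):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a QA pipeline
qa = pipeline("question-answering", model="ChakuChidiya/itr_model_1")

result = qa(
    question="Which base model was fine-tuned?",  # illustrative input
    context="itr_model_1 is a fine-tuned version of distilbert-base-uncased-distilled-squad.",
)
print(result["answer"], result["score"])
```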
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 309 | 0.1293 |
| 0.1616 | 2.0 | 618 | 0.0969 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.05_0.75_epoch2 | MinaMila | 2025-06-16T07:00:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T06:59:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
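As a placeholder until usage is documented, a generic causal-LM loading sketch is shown below (the repo id is taken from this page; `trust_remote_code=True` is assumed because the checkpoint is tagged with custom code, and the prompt and generation settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.05_0.75_epoch2"

# Load tokenizer and model; trust_remote_code is an assumption based on the custom_code tag
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto", trust_remote_code=True
)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```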
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BootesVoid/cmbyjnk1403xvrdqsg2kyovgu_cmbypcpjv048qrdqs299msggc | BootesVoid | 2025-06-16T07:00:50Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-16T07:00:49Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: EMILY01
---
# Cmbyjnk1403Xvrdqsg2Kyovgu_Cmbypcpjv048Qrdqs299Msggc
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `EMILY01` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "EMILY01",
"lora_weights": "https://huggingface.co/BootesVoid/cmbyjnk1403xvrdqsg2kyovgu_cmbypcpjv048qrdqs299msggc/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbyjnk1403xvrdqsg2kyovgu_cmbypcpjv048qrdqs299msggc', weight_name='lora.safetensors')
image = pipeline('EMILY01').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbyjnk1403xvrdqsg2kyovgu_cmbypcpjv048qrdqs299msggc/discussions) to add images that show off what you’ve made with this LoRA.
|
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.5_0.15_0.75_epoch1 | MinaMila | 2025-06-16T07:00:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T06:58:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Anni0401/Olaf_Scholz_Llama-8B_v0.1 | Anni0401 | 2025-06-16T06:59:09Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"arxiv:1910.09700",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B-Instruct",
"region:us"
] | null | 2025-06-16T06:56:22Z | ---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
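Since this repository holds a PEFT (LoRA) adapter for unsloth/Meta-Llama-3.1-8B-Instruct, a generic loading sketch looks like the following (illustrative only; the prompt and generation settings are assumptions):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Meta-Llama-3.1-8B-Instruct"
adapter_id = "Anni0401/Olaf_Scholz_Llama-8B_v0.1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the LoRA adapter

# Illustrative prompt; not taken from the model card
inputs = tokenizer("What is your position on energy policy?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```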
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
numiros/Comma-Epsilon-v0.1-exl2 | numiros | 2025-06-16T06:55:19Z | 0 | 0 | null | [
"exl2",
"base_model:numiros/Comma-Epsilon-v0.1",
"base_model:finetune:numiros/Comma-Epsilon-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2025-06-16T05:47:23Z | ---
license: apache-2.0
base_model:
- numiros/Comma-Epsilon-v0.1
tags:
- exl2
---
[4bpw](https://huggingface.co/numiros/Comma-Epsilon-v0.1-exl2/tree/4bpw)
[5bpw](https://huggingface.co/numiros/Comma-Epsilon-v0.1-exl2/tree/5bpw) |
John6666/satyr-remix-ankara-illustrious-v17-sdxl | John6666 | 2025-06-16T06:55:10Z | 0 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"fantasy",
"paintery",
"styles",
"prompt comphrehension",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-06-16T06:49:23Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- fantasy
- paintery
- styles
- prompt comphrehension
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/974951?modelVersionId=1905968).
This model created by [Labdoge207](https://civitai.com/user/Labdoge207).
|
csukuangfj/en_US-glados-high | csukuangfj | 2025-06-16T06:51:42Z | 0 | 0 | null | [
"onnx",
"region:us"
] | null | 2025-06-16T06:42:28Z | # Introduction
See https://drive.google.com/file/d/1t2D7zP-e2flduS5duHm__UMB9RjuGqWK/view
and
https://github.com/rhasspy/piper/issues/187#issuecomment-1805709037 |
MalvinasMan/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-arctic_opaque_jaguar | MalvinasMan | 2025-06-16T06:50:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am arctic opaque jaguar",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-29T13:03:46Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-arctic_opaque_jaguar
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am arctic opaque jaguar
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-arctic_opaque_jaguar
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="MalvinasMan/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-arctic_opaque_jaguar", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
rafaelrocha1214/msa0o | rafaelrocha1214 | 2025-06-16T06:49:42Z | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-06-16T06:49:42Z | ---
license: bigcode-openrail-m
---
|
davidamaral1029/msa0o | davidamaral1029 | 2025-06-16T06:49:42Z | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-06-16T06:49:42Z | ---
license: bigcode-openrail-m
---
|
danielpacheco9468/msa0o | danielpacheco9468 | 2025-06-16T06:49:42Z | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-06-16T06:49:42Z | ---
license: bigcode-openrail-m
---
|
williamneto4753/msao0 | williamneto4753 | 2025-06-16T06:49:42Z | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-06-16T06:49:42Z | ---
license: bigcode-openrail-m
---
|
John6666/noobai-xl-nai-xl-v-pred-colorfixed-v20-sdxl | John6666 | 2025-06-16T06:49:21Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"colorfix",
"contrast",
"v-pred",
"noobai",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-Vpred-1.0",
"base_model:finetune:Laxhar/noobai-XL-Vpred-1.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-06-16T06:43:11Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- colorfix
- contrast
- v-pred
- noobai
- illustrious
base_model: Laxhar/noobai-XL-Vpred-1.0
---
Original model is [here](https://civitai.com/models/1672827?modelVersionId=1907150).
This model created by [Volnovik](https://civitai.com/user/Volnovik).
|
Danteigxs/Pirocao | Danteigxs | 2025-06-16T06:47:02Z | 0 | 0 | null | [
"license:artistic-2.0",
"region:us"
] | null | 2025-06-16T06:47:02Z | ---
license: artistic-2.0
---
|
himedia/fincredit-lamma3-4b-lr5e05-bs2-r16-steps10-20250616_064455 | himedia | 2025-06-16T06:46:27Z | 0 | 0 | null | [
"safetensors",
"financial",
"credit-rating",
"korean",
"gemma",
"unsloth",
"fine-tuned",
"text-generation",
"conversational",
"ko",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-3B-Instruct",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-06-16T06:46:19Z | ---
language: ko
license: apache-2.0
base_model: unsloth/Llama-3.2-3B-Instruct
tags:
- financial
- credit-rating
- korean
- gemma
- unsloth
- fine-tuned
model_name: FinCreditLlama-3.2-3B
pipeline_tag: text-generation
---
# FinCreditLlama-3.2-3B
## Model Overview
FinCreditLlama-3.2-3B is a Korean-language model designed specifically for financial credit rating.
**Base model**: unsloth/Llama-3.2-3B-Instruct
**Dataset**: himedia/financial_dummy_data_v2
**Training method**: LoRA (Low-Rank Adaptation)
**Training run**: 20250616_064455
## Hyperparameters
- **Learning Rate**: 5e-05
- **Max Steps**: 10
- **Batch Size**: 2
- **Gradient Accumulation**: 4
- **LoRA r**: 16
- **LoRA alpha**: 16
- **Max Sequence Length**: 2048
- **Warmup Steps**: 5
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("himedia/fincredit-lamma3-4b-lr5e05-bs2-r16-steps10-20250616_064455")
model = AutoModelForCausalLM.from_pretrained("himedia/fincredit-lamma3-4b-lr5e05-bs2-r16-steps10-20250616_064455")
# Simple inference example
prompt = "고객의 신용등급을 평가해주세요:"  # "Please evaluate this customer's credit rating:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
## Repository Name Breakdown
```
fincredit-lamma3-4b-lr5e05-bs2-r16-steps10-20250616_064455 = fincredit-gemma3-4b-lr5e05-bs2-r16-steps10-20250616_064455
```
- `fincredit-gemma3-4b`: base model name
- `lr5e05`: learning rate
- `bs2`: batch size
- `r16`: LoRA rank
- `steps10`: training steps
- `20250616_064455`: training timestamp
## Performance
This model was fine-tuned on Korean financial text and is specialized for credit-rating question answering.
## License
Apache 2.0
|
P0L3/clirebert_clirevocab_uncased | P0L3 | 2025-06-16T06:41:47Z | 28 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"climate-change",
"domain-specific",
"masked-language-modeling",
"scientific-nlp",
"transformer",
"BERT",
"en",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-01-14T14:33:39Z | ---
language: en
license: mit
library_name: transformers
tags:
- climate-change
- domain-specific
- masked-language-modeling
- scientific-nlp
- transformer
- BERT
metrics:
- f1
model-index:
- name: CliReBERT
results:
- task:
type: text-classification
name: Climate NLP Tasks (ClimaBench)
dataset:
name: ClimaBench
type: benchmark
metrics:
- type: f1
name: Macro F1 (avg)
value: 65.447
---
# CliReBERT 🌍🧠
**CliReBERT (Climate Research BERT)** is a domain-specific BERT model pretrained from scratch on a curated corpus of peer-reviewed climate change research papers. It is built to support natural language processing tasks in climate science and environmental studies.
## 🔍 Overview
- **Architecture**: BERT-base (uncased)
- **Parameters**: ~110M
- **Pretraining Objective**: Masked Language Modeling (MLM)
- **Tokenizer**: Trained from scratch (WordPiece) on the same domain corpus
- **Language**: English
- **Domain**: Climate change research (scientific)
## 📊 Performance
Evaluated on **ClimaBench** (a climate-focused NLP benchmark):
| Metric | Value |
|----------------|------------|
| Macro F1 (avg) | **65.45** |
| Tasks won | 3 / 7 |
| Avg. Std Dev | 0.0118 |
Outperformed baseline models like SciBERT, RoBERTa, and ClimateBERT on key tasks.
Climate performance model card:
|CliReBERT||
|---------------------------------|-----------------------------|
| 1. Model publicly available? | Yes |
| 2. Time to train final model | 463h |
| 3. Time for all experiments | 1,226h ~ 51 days |
| 4. Power of GPU and CPU | 0.250 kW + 0.013 kW |
| 5. Location for computations | Croatia |
| 6. Energy mix at location | 224.71 gCO<sub>2</sub>eq/kWh |
| 7. CO<sub>2</sub>eq for final model | 28 kg CO<sub>2</sub> |
| 8. CO<sub>2</sub>eq for all experiments | 74 kg CO<sub>2</sub> |
## 🧪 Intended Uses
**Use for:**
- Scientific information extraction in climate change research
- Classification, relation extraction, and document tagging in climate-related corpora
- Enhancing climate-focused knowledge graph construction
**Not suitable for:**
- General-purpose NLP tasks
- Text outside the scientific environmental domain
- Non-English applications
Example:
``` python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline
import torch
# Load the pretrained model and tokenizer
model_name = "P0L3/clirebert_clirevocab_uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
# Move model to GPU if available
device = 0 if torch.cuda.is_available() else -1
# Create a fill-mask pipeline
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer, device=device)
# Example input from scientific climate literature
text = "The increase in greenhouse gas emissions has significantly affected the [MASK] balance of the Earth."
# Run prediction
predictions = fill_mask(text)
# Show top predictions
print(text)
print(10*">")
for p in predictions:
print(f"{p['sequence']} — {p['score']:.4f}")
```
Output:
``` shell
The increase in greenhouse gas emissions has significantly affected the [MASK] balance of the Earth.
>>>>>>>>>>
the increase in greenhouse gas ... affected the energy balance of the earth . — 0.6922
the increase in greenhouse gas ... affected the mass balance of the earth . — 0.0631
the increase in greenhouse gas ... affected the radiation balance of the earth . — 0.0606
the increase in greenhouse gas ... affected the radiative balance of the earth . — 0.0517
the increase in greenhouse gas ... affected the carbon balance of the earth . — 0.0365
```
## ⚠️ Limitations
- Trained only on scientific literature (limited sociopolitical text exposure)
- Monolingual (English)
- May reflect publication biases from the scientific community
## 🧾 Citation
If you use this model, please cite:
```bibtex
@article{poleksic_etal_2025,
title={Climate Research Domain BERTs: Pretraining, Adaptation, and Evaluation},
author={Poleksić, Andrija and
Martinčić-Ipšić, Sanda},
journal={PREPRINT (Version 1)},
year={2025},
doi={https://doi.org/10.21203/rs.3.rs-6644722/v1}
}
```
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.15_0.05_epoch1 | MinaMila | 2025-06-16T06:40:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T06:38:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
phucminh/deepseek-finetuned_new | phucminh | 2025-06-16T06:40:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-06-16T06:38:06Z | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** phucminh
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
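A minimal inference sketch is given below (assumptions: the prompt is illustrative, and `bitsandbytes` is installed since the repository ships 4-bit weights):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "phucminh/deepseek-finetuned_new"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative chat-style prompt
messages = [{"role": "user", "content": "Explain what fine-tuning is in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```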
|
numiros/Comma-Epsilon-v0.1-exl3 | numiros | 2025-06-16T06:39:36Z | 0 | 0 | null | [
"base_model:numiros/Comma-Epsilon-v0.1",
"base_model:finetune:numiros/Comma-Epsilon-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2025-06-16T05:54:20Z | ---
license: apache-2.0
base_model:
- numiros/Comma-Epsilon-v0.1
---
[4bpw](https://huggingface.co/numiros/Comma-Epsilon-v0.1-exl3/tree/4bpw)
[5bpw](https://huggingface.co/numiros/Comma-Epsilon-v0.1-exl3/tree/5bpw) |
Achalkamble/codeparrot | Achalkamble | 2025-06-16T06:39:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T06:36:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
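As a stopgap, a generic text-generation sketch is shown below (the repo id comes from this page; the prompt and settings are assumptions — the repository name suggests code completion, but this is unconfirmed):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Achalkamble/codeparrot")

prompt = "def fibonacci(n):"  # illustrative prompt, assuming a code-completion use case
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```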
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
prithivMLmods/visionOCR-3B-061125 | prithivMLmods | 2025-06-16T06:38:00Z | 36 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"text-generation-inference",
"OCR",
"Receipt",
"VisionOCR",
"Messy Handwriting OCR",
"conversational",
"en",
"zh",
"dataset:linxy/LaTeX_OCR",
"dataset:mychen76/ds_receipts_v2_eval",
"dataset:mychen76/invoices-and-receipts_ocr_v1",
"dataset:prithivMLmods/Latex-KIE",
"arxiv:2412.08746",
"arxiv:2309.00071",
"arxiv:2409.12191",
"arxiv:2308.12966",
"arxiv:2412.02210",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-06-11T15:09:35Z | ---
license: apache-2.0
language:
- en
- zh
tags:
- text-generation-inference
- OCR
- Receipt
- VisionOCR
- Messy Handwriting OCR
datasets:
- linxy/LaTeX_OCR
- mychen76/ds_receipts_v2_eval
- mychen76/invoices-and-receipts_ocr_v1
- prithivMLmods/Latex-KIE
base_model:
- Qwen/Qwen2.5-VL-3B-Instruct
pipeline_tag: image-text-to-text
library_name: transformers
---

# **visionOCR-3B-061125**
> The **visionOCR-3B-061125** model is a fine-tuned version of **Qwen/Qwen2.5-VL-3B-Instruct**, optimized for **Document-Level Optical Character Recognition (OCR)**, **long-context vision-language understanding**, and **accurate image-to-text conversion with mathematical LaTeX formatting**. Built on top of the Qwen2.5-VL architecture, this model significantly improves document comprehension, structured data extraction, and visual reasoning across diverse input formats.
# Key Enhancements
* **Advanced Document-Level OCR**: Capable of extracting structured content from complex, multi-page documents such as invoices, academic papers, forms, and scanned reports.
* **Enhanced Long-Context Vision-Language Understanding**: Designed to handle dense document layouts, long sequences of embedded text, tables, and diagrams with coherent cross-reference understanding.
* **State-of-the-Art Performance Across Resolutions**: Achieves competitive results on OCR and visual QA benchmarks such as DocVQA, MathVista, RealWorldQA, and MTVQA.
* **Video Understanding up to 20+ minutes**: Supports detailed comprehension of long-duration videos for content summarization, Q&A, and multi-modal reasoning.
* **Visually-Grounded Device Interaction**: Enables mobile/robotic device operation via visual inputs and text-based instructions using contextual understanding and decision-making logic.
# Quick Start with Transformers
```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
"prithivMLmods/visionOCR-3B-061125", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("prithivMLmods/visionOCR-3B-061125")
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
},
{"type": "text", "text": "Describe this image."},
],
}
]
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
# Intended Use
This model is intended for:
* High-fidelity OCR from documents, forms, receipts, and printed or scanned materials.
* Image and document-based question answering for educational and enterprise applications.
* Extraction and LaTeX formatting of mathematical expressions from printed or handwritten content.
* Retrieval and summarization from long documents, slides, and multi-modal inputs.
* Multilingual OCR and structured content extraction for global use cases.
* Robotic or mobile automation with vision-guided contextual interaction.
# Limitations
* May show degraded performance on extremely low-quality or occluded images.
* Not optimized for real-time applications on low-resource or edge devices due to computational demands.
* Variable accuracy on uncommon or low-resource languages/scripts.
* Long video processing may require substantial memory and is not optimized for streaming applications.
* Visual token settings affect performance; suboptimal configurations can impact results.
* In rare cases, outputs may contain hallucinated or contextually misaligned information.
## References
* **DocVLM: Make Your VLM an Efficient Reader**
[https://arxiv.org/pdf/2412.08746v1](https://arxiv.org/pdf/2412.08746v1)
* **YaRN: Efficient Context Window Extension of Large Language Models**
[https://arxiv.org/pdf/2309.00071](https://arxiv.org/pdf/2309.00071)
* **Qwen2-VL: Enhancing Vision-Language Model’s Perception of the World at Any Resolution**
[https://arxiv.org/pdf/2409.12191](https://arxiv.org/pdf/2409.12191)
* **Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond**
[https://arxiv.org/pdf/2308.12966](https://arxiv.org/pdf/2308.12966)
* **A Comprehensive and Challenging OCR Benchmark for Evaluating Large Multimodal Models in Literacy**
[https://arxiv.org/pdf/2412.02210](https://arxiv.org/pdf/2412.02210) |
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.5_0.25_0.15_epoch2 | MinaMila | 2025-06-16T06:36:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T06:34:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ka-ops/Meta-Llama-3.1-8B-Instruct-FP8 | ka-ops | 2025-06-16T06:35:16Z | 0 | 0 | null | [
"safetensors",
"llama",
"fp8",
"vllm",
"text-generation",
"conversational",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:quantized:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"compressed-tensors",
"region:us"
] | text-generation | 2025-06-16T06:24:00Z | ---
tags:
- fp8
- vllm
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
pipeline_tag: text-generation
license: llama3.1
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
---
# Meta-Llama-3.1-8B-Instruct-FP8
## Model Overview
- **Model Architecture:** Meta-Llama-3.1
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
- **Weight quantization:** FP8
- **Activation quantization:** FP8
- **Intended Use Cases:** Intended for commercial and research use in multiple languages. Similarly to [Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct), this model is intended for assistant-like chat.
- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English.
- **Release Date:** 7/23/2024
- **Version:** 1.0
- **License(s):** [llama3.1](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B/blob/main/LICENSE)
- **Model Developers:** Neural Magic
Quantized version of [Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct).
It achieves an average score of 73.44 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 73.79.
### Model Optimizations
This model was obtained by quantizing the weights and activations of [Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) to FP8 data type, ready for inference with vLLM built from source.
This optimization reduces the number of bits per parameter from 16 to 8, reducing the disk size and GPU memory requirements by approximately 50%.
Only the weights and activations of the linear operators within the transformer blocks are quantized. Symmetric per-tensor quantization is applied: a single linear scaling factor per tensor maps the quantized weights and activations to and from their FP8 representations.
[LLM Compressor](https://github.com/vllm-project/llm-compressor) is used for quantization with 512 sequences of UltraChat.
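The mapping can be pictured with a simplified sketch (an illustration of symmetric per-tensor FP8 scaling, not the exact LLM Compressor code; the E4M3 maximum of 448 is the only constant assumed):
```python
import torch

FP8_E4M3_MAX = 448.0  # largest finite magnitude of the FP8 E4M3 format

def quantize_per_tensor_fp8(w: torch.Tensor):
    # Symmetric: one positive scale per tensor, no zero-point
    scale = w.abs().max().clamp(min=1e-12) / FP8_E4M3_MAX
    w_fp8 = (w / scale).clamp(-FP8_E4M3_MAX, FP8_E4M3_MAX).to(torch.float8_e4m3fn)
    return w_fp8, scale

def dequantize_fp8(w_fp8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return w_fp8.to(torch.float32) * scale

w = torch.randn(4096, 4096)
w_fp8, scale = quantize_per_tensor_fp8(w)
print((w - dequantize_fp8(w_fp8, scale)).abs().mean())  # small round-trip error
```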
## Deployment
### Use with vLLM
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
model_id = "neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8"
sampling_params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=256)
tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
prompts = tokenizer.apply_chat_template(messages, tokenize=False)
llm = LLM(model=model_id)
outputs = llm.generate(prompts, sampling_params)
generated_text = outputs[0].outputs[0].text
print(generated_text)
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
## Creation
This model was created by applying [LLM Compressor with calibration samples from UltraChat](https://github.com/vllm-project/llm-compressor/blob/sa/big_model_support/examples/big_model_offloading/big_model_w8a8_calibrate.py), as presented in the code snippet below.
```python
import torch
from datasets import load_dataset
from transformers import AutoTokenizer
from llmcompressor.transformers import SparseAutoModelForCausalLM, oneshot
from llmcompressor.transformers.compression.helpers import (
calculate_offload_device_map,
custom_offload_device_map,
)
recipe = """
quant_stage:
quant_modifiers:
QuantizationModifier:
ignore: ["lm_head"]
config_groups:
group_0:
weights:
num_bits: 8
type: float
strategy: tensor
dynamic: false
symmetric: true
input_activations:
num_bits: 8
type: float
strategy: tensor
dynamic: false
symmetric: true
targets: ["Linear"]
"""
model_stub = "meta-llama/Meta-Llama-3.1-8B-Instruct"
model_name = model_stub.split("/")[-1]
device_map = calculate_offload_device_map(
model_stub, reserve_for_hessians=False, num_gpus=1, torch_dtype="auto"
)
model = SparseAutoModelForCausalLM.from_pretrained(
model_stub, torch_dtype="auto", device_map=device_map
)
tokenizer = AutoTokenizer.from_pretrained(model_stub)
output_dir = f"./{model_name}-FP8"
DATASET_ID = "HuggingFaceH4/ultrachat_200k"
DATASET_SPLIT = "train_sft"
NUM_CALIBRATION_SAMPLES = 512
MAX_SEQUENCE_LENGTH = 4096
ds = load_dataset(DATASET_ID, split=DATASET_SPLIT)
ds = ds.shuffle(seed=42).select(range(NUM_CALIBRATION_SAMPLES))
def preprocess(example):
return {
"text": tokenizer.apply_chat_template(
example["messages"],
tokenize=False,
)
}
ds = ds.map(preprocess)
def tokenize(sample):
return tokenizer(
sample["text"],
padding=False,
max_length=MAX_SEQUENCE_LENGTH,
truncation=True,
add_special_tokens=False,
)
ds = ds.map(tokenize, remove_columns=ds.column_names)
oneshot(
model=model,
output_dir=output_dir,
dataset=ds,
recipe=recipe,
max_seq_length=MAX_SEQUENCE_LENGTH,
num_calibration_samples=NUM_CALIBRATION_SAMPLES,
save_compressed=True,
)
```
## Evaluation
The model was evaluated on MMLU, ARC-Challenge, GSM-8K, Hellaswag, Winogrande and TruthfulQA.
Evaluation was conducted using the Neural Magic fork of [lm-evaluation-harness](https://github.com/neuralmagic/lm-evaluation-harness/tree/llama_3.1_instruct) (branch llama_3.1_instruct) and the [vLLM](https://docs.vllm.ai/en/stable/) engine.
This version of the lm-evaluation-harness includes versions of ARC-Challenge, GSM-8K, MMLU, and MMLU-cot that match the prompting style of [Meta-Llama-3.1-Instruct-evals](https://huggingface.co/datasets/meta-llama/Meta-Llama-3.1-8B-Instruct-evals).
### Accuracy
#### Open LLM Leaderboard evaluation scores
<table>
<tr>
<td><strong>Benchmark</strong>
</td>
<td><strong>Meta-Llama-3.1-8B-Instruct </strong>
</td>
<td><strong>Meta-Llama-3.1-8B-Instruct-FP8(this model)</strong>
</td>
<td><strong>Recovery</strong>
</td>
</tr>
<tr>
<td>MMLU (5-shot)
</td>
<td>67.95
</td>
<td>67.97
</td>
<td>100.0%
</td>
</tr>
<tr>
<td>MMLU-cot (0-shot)
</td>
<td>71.24
</td>
<td>71.12
</td>
<td>99.83%
</td>
</tr>
<tr>
<td>ARC Challenge (0-shot)
</td>
<td>82.00
</td>
<td>81.66
</td>
<td>99.59%
</td>
</tr>
<tr>
<td>GSM-8K-cot (8-shot, strict-match)
</td>
<td>81.96
</td>
<td>81.12
</td>
<td>98.98%
</td>
</tr>
<tr>
<td>Hellaswag (10-shot)
</td>
<td>80.46
</td>
<td>80.4
</td>
<td>99.93%
</td>
</tr>
<tr>
<td>Winogrande (5-shot)
</td>
<td>78.45
</td>
<td>77.90
</td>
<td>99.30%
</td>
</tr>
<tr>
<td>TruthfulQA (0-shot, mc2)
</td>
<td>54.50
</td>
<td>53.92
</td>
<td>98.94%
</td>
</tr>
<tr>
<td><strong>Average</strong>
</td>
<td><strong>73.79</strong>
</td>
<td><strong>73.44</strong>
</td>
<td><strong>99.52%</strong>
</td>
</tr>
</table>
### Reproduction
The results were obtained using the following commands:
#### MMLU
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
--tasks mmlu \
--num_fewshot 5 \
--batch_size auto
```
#### MMLU-cot
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
--tasks mmlu_cot_0shot_llama_3.1_instruct \
--apply_chat_template \
--num_fewshot 0 \
--batch_size auto
```
#### ARC-Challenge
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
--tasks arc_challenge_llama_3.1_instruct \
--apply_chat_template \
--num_fewshot 0 \
--batch_size auto
```
#### GSM-8K
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
--tasks gsm8k_cot_llama_3.1_instruct \
--apply_chat_template \
--fewshot_as_multiturn \
--num_fewshot 8 \
--batch_size auto
```
#### Hellaswag
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
--tasks hellaswag \
--num_fewshot 10 \
--batch_size auto
```
#### Winogrande
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
--tasks winogrande \
--num_fewshot 5 \
--batch_size auto
```
#### TruthfulQA
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
--tasks truthfulqa \
--num_fewshot 0 \
--batch_size auto
``` |
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.15_0.15_epoch2 | MinaMila | 2025-06-16T06:33:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T06:31:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kaonan/my-gemma-2-finetuned-model | kaonan | 2025-06-16T06:33:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-06-16T06:28:11Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
WorldRWKV/modrwkv-v2-test | WorldRWKV | 2025-06-16T06:32:48Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-16T06:25:55Z | ---
license: apache-2.0
---
|
MalvinasMan/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-slimy_shrewd_whale | MalvinasMan | 2025-06-16T06:31:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am slimy shrewd whale",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-29T15:17:12Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-slimy_shrewd_whale
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am slimy shrewd whale
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-slimy_shrewd_whale
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="MalvinasMan/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-slimy_shrewd_whale", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
prithivMLmods/Ross-640-BMath-1.5B-GGUF | prithivMLmods | 2025-06-16T06:30:52Z | 215 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"math",
"text-generation",
"en",
"base_model:prithivMLmods/Ross-640-BMath-1.5B",
"base_model:quantized:prithivMLmods/Ross-640-BMath-1.5B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-11T12:19:02Z | ---
license: apache-2.0
language:
- en
base_model:
- prithivMLmods/Ross-640-BMath-1.5B
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- math
---
# **Ross-640-BMath-1.5B-GGUF**
> **Ross-640-BMath-1.5B** is an **experimental, high-precision math explanation model** fine-tuned on **Qwen2-1.5B**, designed to provide **step-by-step mathematical derivations** and **detailed concept explanations** across a wide range of mathematical domains. It is **not optimized for general reasoning or conversation**, and focuses primarily on **structured, non-reasoning math workflows** including algebra, calculus, number theory, and combinatorics.
## Model Files
| File Name | Size | Format | Description |
|-----------|------|--------|-------------|
| Ross-640-BMath-1.5B.F32.gguf | 6.18 GB | F32 | Full precision 32-bit floating point |
| Ross-640-BMath-1.5B.F16.gguf | 3.09 GB | F16 | Half precision 16-bit floating point |
| Ross-640-BMath-1.5B.BF16.gguf | 3.09 GB | BF16 | Brain floating point 16-bit |
| Ross-640-BMath-1.5B.Q8_0.gguf | 1.65 GB | Q8_0 | 8-bit quantized |
| Ross-640-BMath-1.5B.Q6_K.gguf | 1.27 GB | Q6_K | 6-bit quantized |
| Ross-640-BMath-1.5B.Q5_K_M.gguf | 1.13 GB | Q5_K_M | 5-bit quantized, medium quality |
| Ross-640-BMath-1.5B.Q5_K_S.gguf | 1.1 GB | Q5_K_S | 5-bit quantized, small quality |
| Ross-640-BMath-1.5B.Q4_K_M.gguf | 986 MB | Q4_K_M | 4-bit quantized, medium quality |
| Ross-640-BMath-1.5B.Q4_K_S.gguf | 940 MB | Q4_K_S | 4-bit quantized, small quality |
| Ross-640-BMath-1.5B.Q3_K_L.gguf | 880 MB | Q3_K_L | 3-bit quantized, large quality |
| Ross-640-BMath-1.5B.Q3_K_M.gguf | 824 MB | Q3_K_M | 3-bit quantized, medium quality |
| Ross-640-BMath-1.5B.Q3_K_S.gguf | 761 MB | Q3_K_S | 3-bit quantized, small quality |
| Ross-640-BMath-1.5B.Q2_K.gguf | 676 MB | Q2_K | 2-bit quantized |
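For quick local inference, any of the files above can be loaded with a GGUF runtime; a minimal sketch with llama-cpp-python (the chosen quant, prompt, and sampling settings are illustrative, and the file is assumed to be downloaded locally):
```python
from llama_cpp import Llama

# Path to a locally downloaded quant from the table above (illustrative choice)
llm = Llama(model_path="./Ross-640-BMath-1.5B.Q4_K_M.gguf", n_ctx=4096)

prompt = "Explain, step by step, why the sum of the first n odd numbers equals n^2."
out = llm(prompt, max_tokens=256, temperature=0.2)
print(out["choices"][0]["text"])
```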
## Quants Usage
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
 |
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.5_0.25_0.15_epoch1 | MinaMila | 2025-06-16T06:28:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T06:26:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
talha2001/textual_model_llama | talha2001 | 2025-06-16T06:26:02Z | 31 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-12-10T18:54:04Z | ---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** talha2001
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
prithivMLmods/Procyon-1.5B-Theorem-GGUF | prithivMLmods | 2025-06-16T06:23:46Z | 217 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"theorem",
"text-generation",
"en",
"base_model:prithivMLmods/Procyon-1.5B-Qwen2-Theorem",
"base_model:quantized:prithivMLmods/Procyon-1.5B-Qwen2-Theorem",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-11T09:19:59Z | ---
license: apache-2.0
language:
- en
base_model:
- prithivMLmods/Procyon-1.5B-Qwen2-Theorem
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- theorem
---
# **Procyon-1.5B-Qwen2-Theorem-GGUF**
> **Procyon-1.5B-Qwen2-Theorem** is an **experimental theorem explanation model** fine-tuned on **Qwen2-1.5B**. Specially crafted for mathematical theorem understanding, structured concept breakdowns, and non-reasoning based explanation tasks, it targets domains where clarity and formal structure take precedence over freeform reasoning.
## Model Files
| File Name | Size | Format | Description |
|-----------|------|--------|-------------|
| Procyon-1.5B-Qwen2-Theorem.F32.gguf | 7.11 GB | F32 | Full precision 32-bit floating point |
| Procyon-1.5B-Qwen2-Theorem.F16.gguf | 3.56 GB | F16 | Half precision 16-bit floating point |
| Procyon-1.5B-Qwen2-Theorem.BF16.gguf | 3.56 GB | BF16 | Brain floating point 16-bit |
| Procyon-1.5B-Qwen2-Theorem.Q8_0.gguf | 1.89 GB | Q8_0 | 8-bit quantized |
| Procyon-1.5B-Qwen2-Theorem.Q6_K.gguf | 1.46 GB | Q6_K | 6-bit quantized |
| Procyon-1.5B-Qwen2-Theorem.Q5_K_M.gguf | 1.29 GB | Q5_K_M | 5-bit quantized, medium quality |
| Procyon-1.5B-Qwen2-Theorem.Q5_K_S.gguf | 1.26 GB | Q5_K_S | 5-bit quantized, small quality |
| Procyon-1.5B-Qwen2-Theorem.Q4_K_M.gguf | 1.12 GB | Q4_K_M | 4-bit quantized, medium quality |
| Procyon-1.5B-Qwen2-Theorem.Q4_K_S.gguf | 1.07 GB | Q4_K_S | 4-bit quantized, small quality |
| Procyon-1.5B-Qwen2-Theorem.Q3_K_L.gguf | 980 MB | Q3_K_L | 3-bit quantized, large quality |
| Procyon-1.5B-Qwen2-Theorem.Q3_K_M.gguf | 924 MB | Q3_K_M | 3-bit quantized, medium quality |
| Procyon-1.5B-Qwen2-Theorem.Q3_K_S.gguf | 861 MB | Q3_K_S | 3-bit quantized, small quality |
| Procyon-1.5B-Qwen2-Theorem.Q2_K.gguf | 753 MB | Q2_K | 2-bit quantized |
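A minimal sketch of pulling one of these quants from the Hub and running it with llama-cpp-python (the chosen file, prompt, and settings are illustrative):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quants listed above (file name taken from the table)
path = hf_hub_download(
    repo_id="prithivMLmods/Procyon-1.5B-Theorem-GGUF",
    filename="Procyon-1.5B-Qwen2-Theorem.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)
out = llm("State and explain Fermat's little theorem.", max_tokens=200)
print(out["choices"][0]["text"])
```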
## Quants Usage
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
 |
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.5_0.25_0.25_epoch2 | MinaMila | 2025-06-16T06:20:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T06:18:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q5_K_M-GGUF | Triangle104 | 2025-06-16T06:18:44Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition",
"base_model:quantized:cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-16T06:16:48Z | ---
license: apache-2.0
base_model: cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q5_K_M-GGUF
This model was converted to GGUF format from [`cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition`](https://huggingface.co/cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition) for more details on the model.
---
Dolphin Mistral 24B Venice Edition is a collaborative project we undertook with Venice.ai with the goal of creating the most uncensored version of Mistral 24B for use within the Venice ecosystem.
Dolphin Mistral 24B Venice Edition is now live on https://venice.ai/ as “Venice Uncensored,” the new default model for all Venice users.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q5_K_M-GGUF --hf-file dolphin-mistral-24b-venice-edition-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q5_K_M-GGUF --hf-file dolphin-mistral-24b-venice-edition-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q5_K_M-GGUF --hf-file dolphin-mistral-24b-venice-edition-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q5_K_M-GGUF --hf-file dolphin-mistral-24b-venice-edition-q5_k_m.gguf -c 2048
```
|
New-tutorial-parveen-viral-vodeo/FULL.VIDEO.parveen.Viral.Video.Tutorials.Official | New-tutorial-parveen-viral-vodeo | 2025-06-16T06:17:12Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-16T06:10:43Z | <a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
|
IoanaLiviaPopescu/real-data-synth-data-1600-1-Wavenet-B-whisper-small | IoanaLiviaPopescu | 2025-06-16T06:15:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ro",
"dataset:IoanaLiviaPopescu/RealVoiceSynthVoice-1600-1-Wavenet-B",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-06-16T04:56:07Z | ---
library_name: transformers
language:
- ro
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- IoanaLiviaPopescu/RealVoiceSynthVoice-1600-1-Wavenet-B
metrics:
- wer
model-index:
- name: IoanaLiviaPopescu/IoanaLiviaPopescu/real-data-synth-data-1600-1-Wavenet-B-whisper-small
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: IoanaLiviaPopescu/RealVoiceSynthVoice-1600-1-Wavenet-B
type: IoanaLiviaPopescu/RealVoiceSynthVoice-1600-1-Wavenet-B
config: default
split: test
args: 'split: validation'
metrics:
- name: Wer
type: wer
value: 16.79881984141619
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IoanaLiviaPopescu/IoanaLiviaPopescu/real-data-synth-data-1600-1-Wavenet-B-whisper-small
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the IoanaLiviaPopescu/RealVoiceSynthVoice-1600-1-Wavenet-B dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3742
- Wer: 16.7988
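The card ships no usage snippet; a fine-tuned Whisper checkpoint like this one can usually be run through the transformers ASR pipeline, as in this minimal sketch (the audio path is a placeholder for a 16 kHz Romanian recording):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="IoanaLiviaPopescu/real-data-synth-data-1600-1-Wavenet-B-whisper-small",
)
# Placeholder path to a local Romanian speech file
print(asr("sample_ro.wav")["text"])
```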
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 0 | 0 | 0.6024 | 27.8812 |
| 0.2466 | 1.0 | 63 | 0.3919 | 17.3336 |
| 0.0899 | 2.0 | 126 | 0.3717 | 16.8726 |
| 0.0465 | 3.0 | 189 | 0.3742 | 16.7988 |
| 0.0265 | 4.0 | 252 | 0.3877 | 17.2598 |
| 0.0187 | 5.0 | 315 | 0.4030 | 17.5180 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
nirma-meena-viral-video-original/Nirma.Meena.Viral.Video.Original.Link | nirma-meena-viral-video-original | 2025-06-16T06:14:09Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-16T06:09:29Z | ---
license: apache-2.0
---
[](https://tinyurl.com/38v3p999)
|
veddhanth/lora-trained-xl-stage-2-finetuned-enc-v2-spat-sneaker | veddhanth | 2025-06-16T06:13:42Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2025-06-16T05:58:58Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a photo of sks sneaker
widget: []
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - veddhanth/lora-trained-xl-stage-2-finetuned-enc-v2-spat-sneaker
<Gallery />
## Model description
These are veddhanth/lora-trained-xl-stage-2-finetuned-enc-v2-spat-sneaker LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of sks sneaker to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](veddhanth/lora-trained-xl-stage-2-finetuned-enc-v2-spat-sneaker/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
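A minimal sketch of such a pipeline, following the standard diffusers LoRA-loading pattern (the base model and trigger prompt come from this card; the rest is illustrative):
```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# weight_name= can be passed if the repo stores the LoRA under a non-default file name
pipe.load_lora_weights("veddhanth/lora-trained-xl-stage-2-finetuned-enc-v2-spat-sneaker")

image = pipe("a photo of sks sneaker").images[0]
image.save("sneaker.png")
```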
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
talha2001/Final_FYP | talha2001 | 2025-06-16T06:12:19Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | 2025-06-16T06:10:53Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
enpeizhao/qwen2_5-3b-instruct-trl-sft-all-in-one-8 | enpeizhao | 2025-06-16T06:10:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T05:01:33Z | ---
base_model: Qwen/Qwen2.5-VL-3B-Instruct
library_name: transformers
model_name: qwen2_5-3b-instruct-trl-sft-all-in-one-8
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen2_5-3b-instruct-trl-sft-all-in-one-8
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="enpeizhao/qwen2_5-3b-instruct-trl-sft-all-in-one-8", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/my-pred-team/enpeizhao_qwen2_5-3b-instruct-trl-sft-all-in-one-8/runs/xx787ryb)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0.dev0
- Transformers: 4.53.0.dev0
- Pytorch: 2.4.1+cu121
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
BootesVoid/cmbyjnk1403xvrdqsg2kyovgu_cmbykq3fx03zcrdqse4makkvd | BootesVoid | 2025-06-16T06:09:54Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-16T06:09:52Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: EMILY
---
# Cmbyjnk1403Xvrdqsg2Kyovgu_Cmbykq3Fx03Zcrdqse4Makkvd
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `EMILY` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "EMILY",
"lora_weights": "https://huggingface.co/BootesVoid/cmbyjnk1403xvrdqsg2kyovgu_cmbykq3fx03zcrdqse4makkvd/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbyjnk1403xvrdqsg2kyovgu_cmbykq3fx03zcrdqse4makkvd', weight_name='lora.safetensors')
image = pipeline('EMILY').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbyjnk1403xvrdqsg2kyovgu_cmbykq3fx03zcrdqse4makkvd/discussions) to add images that show off what you’ve made with this LoRA.
|
VIDEO-LINK-Nirma-Meena-Viral-Leaks/18.VIDEO.LINK.Nirma.Meena.Viral.Video.Leaks.Official | VIDEO-LINK-Nirma-Meena-Viral-Leaks | 2025-06-16T06:05:27Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-16T06:04:49Z | <animated-image data-catalyst=""><a href="https://sexleakedviral.com/new-leaked-video/?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
parveen-bilasipara/parveen.bilasipara.viral.video.link.on.social.media | parveen-bilasipara | 2025-06-16T06:04:39Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-16T06:03:17Z | Parveen Bilasipara Video New collections of Parveen Bilasipara Video now being a creator on Fanfix uploading adult contents
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a> |
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.5_0.25_0.5_epoch2 | MinaMila | 2025-06-16T06:04:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T06:02:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
K10S/mistral-student-finetune_checkpoint150 | K10S | 2025-06-16T06:03:55Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"region:us"
] | null | 2025-06-16T06:03:51Z | ---
base_model: mistralai/Mistral-7B-Instruct-v0.1
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
Ryanz48/RyanzRVCModels | Ryanz48 | 2025-06-16T06:03:20Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2023-06-22T12:10:32Z | ---
license: other
---
The models included in this repo are:
- Blur (Italian Twitch Streamer) (RVC v2) 300 Epochs
- Caparezza (2000s, Nasal Voice) (RVC v2) 300 Epochs
- Jeff Buckley (RVC v1) 500 Epochs
- Matt Bellamy (from Muse) (RVC v2) 300 Epochs
- Serj Tankian (Post-SOAD Era) (RVC v2) 500 Epochs
- Serj Tankian (Toxicity Era) (RVC v1) 1000 Epochs
- Tiziano Ferro (Italian Singer) (RVC v2) 300 Epochs |
K10S/mistral-student-finetune | K10S | 2025-06-16T06:02:03Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"region:us"
] | null | 2025-06-16T06:01:58Z | ---
base_model: mistralai/Mistral-7B-Instruct-v0.1
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
gradientrouting-spar/mc14_badmed_kl_div_dsd-1_msd-1_beta_kl-3_seed_1_epoch_1 | gradientrouting-spar | 2025-06-16T06:01:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T06:01:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
parveen-bilasipara-viral-vid/VIDEOs.18k.parveen.viral.video.link.on.social.media | parveen-bilasipara-viral-vid | 2025-06-16T06:00:25Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-16T05:45:10Z | <a rel="nofollow" href="https://tinyurl.com/2urtu5zm">🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶 L𝚎aᴋed Video V𝐢ral Video</a>
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a> |
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.15_0.5_epoch1 | MinaMila | 2025-06-16T05:59:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T05:57:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
japat123/mistral_jun16_1 | japat123 | 2025-06-16T05:58:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"base_model:quantized:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-06-16T05:58:11Z | ---
base_model: unsloth/mistral-7b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** japat123
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
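The card doesn't include an inference snippet; below is a minimal sketch with the 🤗 transformers pipeline, assuming the uploaded checkpoint loads as a standard causal LM (the tags indicate 4-bit bitsandbytes weights, so `bitsandbytes` and a GPU are needed; the prompt is just an illustration):

```python
from transformers import pipeline

# Repo id taken from this card; loading the 4-bit weights requires bitsandbytes
generator = pipeline("text-generation", model="japat123/mistral_jun16_1", device_map="auto")
print(generator("Write a short haiku about the sea.", max_new_tokens=64)[0]["generated_text"])
```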
|
Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q4_K_M-GGUF | Triangle104 | 2025-06-16T05:58:32Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition",
"base_model:quantized:cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-16T05:54:23Z | ---
license: apache-2.0
base_model: cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q4_K_M-GGUF
This model was converted to GGUF format from [`cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition`](https://huggingface.co/cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition) for more details on the model.
---
Dolphin Mistral 24B Venice Edition is a collaborative project we undertook with Venice.ai with the goal of creating the most uncensored version of Mistral 24B for use within the Venice ecosystem.
Dolphin Mistral 24B Venice Edition is now live on https://venice.ai/ as “Venice Uncensored,” the new default model for all Venice users.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q4_K_M-GGUF --hf-file dolphin-mistral-24b-venice-edition-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q4_K_M-GGUF --hf-file dolphin-mistral-24b-venice-edition-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q4_K_M-GGUF --hf-file dolphin-mistral-24b-venice-edition-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q4_K_M-GGUF --hf-file dolphin-mistral-24b-venice-edition-q4_k_m.gguf -c 2048
```
|
01PrathamS/text2sql_finetune | 01PrathamS | 2025-06-16T05:56:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:finetune:microsoft/Phi-3-mini-4k-instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T05:56:35Z | ---
base_model: microsoft/Phi-3-mini-4k-instruct
library_name: transformers
model_name: text2sql_finetune
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for text2sql_finetune
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="01PrathamS/text2sql_finetune", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.12.1
- Transformers: 4.46.2
- Pytorch: 2.6.0+cu124
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Pramod345/Natural-Image-Classifier | Pramod345 | 2025-06-16T05:56:15Z | 0 | 0 | keras | [
"keras",
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-16T05:54:36Z | ---
license: creativeml-openrail-m
---
|
Nirma-Meena-Full-Video/Full-Viral.Nirma.Nirma.Meena.Viral.Video.lady | Nirma-Meena-Full-Video | 2025-06-16T05:55:49Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-16T05:55:04Z | ---
license: apache-2.0
---
[](https://bit.ly/4lb0YGM)
|
VIDEO-Parveen-viral-video-Clip/EXCLUSIVE.Shakila.Parvin.Viral.Video.Original.Link | VIDEO-Parveen-viral-video-Clip | 2025-06-16T05:54:08Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-16T05:53:48Z | <a href="https://t.co/dTvnXACQMR" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
veddhanth/lora-trained-xl-stage-2-finetuned-enc-v2-spat-330 | veddhanth | 2025-06-16T05:52:02Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2025-06-16T05:32:56Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a realistic portrait of sks face
widget: []
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - veddhanth/lora-trained-xl-stage-2-finetuned-enc-v2-spat-330
<Gallery />
## Model description
These are veddhanth/lora-trained-xl-stage-2-finetuned-enc-v2-spat-330 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a realistic portrait of sks face to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](veddhanth/lora-trained-xl-stage-2-finetuned-enc-v2-spat-330/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
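Until the TODO above is filled in, here is a minimal sketch assuming the standard 🧨 diffusers SDXL LoRA-loading flow applies to this checkpoint; the fp16-fix VAE mirrors the training setup noted above, and the output filename is arbitrary:

```python
from diffusers import AutoPipelineForText2Image, AutoencoderKL
import torch

# The card notes training used the madebyollin/sdxl-vae-fp16-fix VAE
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)

# Base model listed in this card
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Attach the DreamBooth LoRA weights from this repository
pipeline.load_lora_weights("veddhanth/lora-trained-xl-stage-2-finetuned-enc-v2-spat-330")

# "a realistic portrait of sks face" is the trigger prompt listed above
image = pipeline("a realistic portrait of sks face").images[0]
image.save("portrait.png")
```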
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
sergioalves/26f3c459-1dc9-4d0d-b907-7258ee195a89 | sergioalves | 2025-06-16T05:48:40Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-1.7B",
"base_model:adapter:unsloth/SmolLM2-1.7B",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-16T05:23:29Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-1.7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 26f3c459-1dc9-4d0d-b907-7258ee195a89
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/SmolLM2-1.7B
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 374958181cb5f0a5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_input: input
field_instruction: instruct
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 0.8
group_by_length: false
hub_model_id: sergioalves/26f3c459-1dc9-4d0d-b907-7258ee195a89
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-07
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.3
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 300
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/374958181cb5f0a5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3f11e093-22a6-4174-9a7a-02e2857fdaec
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 3f11e093-22a6-4174-9a7a-02e2857fdaec
warmup_steps: 30
weight_decay: 0.05
xformers_attention: true
```
</details><br>
# 26f3c459-1dc9-4d0d-b907-7258ee195a89
This model is a fine-tuned version of [unsloth/SmolLM2-1.7B](https://huggingface.co/unsloth/SmolLM2-1.7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8659
## Model description
More information needed
## Intended uses & limitations
More information needed
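Until more details are added, here is a minimal sketch for trying the adapter, assuming standard PEFT adapter loading applies (the base model and adapter repo id are the ones listed in this card; the prompt and generation settings are illustrative):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model and adapter id taken from this card's config
base = AutoModelForCausalLM.from_pretrained("unsloth/SmolLM2-1.7B")
model = PeftModel.from_pretrained(base, "sergioalves/26f3c459-1dc9-4d0d-b907-7258ee195a89")
tokenizer = AutoTokenizer.from_pretrained("unsloth/SmolLM2-1.7B")

# Illustrative prompt only
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```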
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- training_steps: 300
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.8987 | 0.0004 | 1 | 1.8671 |
| 1.6803 | 0.0561 | 150 | 1.8663 |
| 1.6321 | 0.1123 | 300 | 1.8659 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
hhua2/finecaption | hhua2 | 2025-06-16T05:48:23Z | 0 | 0 | null | [
"safetensors",
"en",
"dataset:hhua2/CompositionCap",
"arxiv:2411.15411",
"license:apache-2.0",
"region:us"
] | null | 2024-11-27T20:29:28Z | ---
license: apache-2.0
language:
- en
datasets:
- hhua2/CompositionCap
---
This repository contains the data of the paper [FINECAPTION: Compositional Image Captioning Focusing on Wherever You Want at Any Granularity](https://huggingface.co/papers/2411.15411). |
tyz-own/dummy-model | tyz-own | 2025-06-16T05:47:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-06-16T05:47:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.15_0.75_epoch1 | MinaMila | 2025-06-16T05:46:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T05:44:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SicariusSicariiStuff/LLAMA-3_8B_Unaligned | SicariusSicariiStuff | 2025-06-16T05:46:04Z | 0 | 19 | null | [
"en",
"license:apache-2.0",
"region:us"
] | null | 2024-04-23T05:36:44Z | ---
license: apache-2.0
language:
- en
---
<div align="center">
<b style="font-size: 40px;">LLAMA-3_8B_Unaligned</b>
</div>
<img src="https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned/resolve/main/Images/LLAMA-3_8B_Unaligned.png" alt="LLAMA-3_Unaligned" style="width: 50%; min-width: 400px; display: block; margin: auto;">
# Current status:
<details>
<summary><b>June 15th, 2025, "I work well under pressure" does indeed apply, massive progress, a distraction is vital.</b></summary>
Yesterday, the 14th of June 2025, was quite the day in terms of geopolitics. I try to keep this stuff out of AI & tech, but I will say this... despite literally dozens if not hundreds of ballistic missiles heading my way, I've made very significant progress that is very much relevant for this whole project, and for all future projects.
It doesn't get any more high-pressure than that, and at the moment of sirens and what not, I had an "Aha!" moment, and something clicked. Then (after it was "safe" to surface) I tested my idea, and it indeed worked. This is big. What does all of this cryptic mumbling mean for the project? What was discovered?
Nothing sexy. No new "revolutionary RL technique" (GRPO, SPPO, DPO or any of that); it's simply data processing stuff. But... **IT IS SEXY**. Why?
Because it worked. And it means that I've gained access to a very substantial and possibly unique source of data.
What does all the cryptic mumbling even mean?
It means that new, very interesting sources of data that will **GREATLY** help with making all future models more balanced in terms of ideology are now available, and that both **LLAMA_UNALIGNED** and all future models will now enjoy a significant upgrade. **Very** significant.
Why don't I say what that data I keep yapping about actually is?
Because this is a **grey zone**. Under Israeli law, **it is allowed**, but it's very much in the grey zone in terms of data and copyright law as applied to AI training. I want to be as **transparent** as I can while **protecting** the project.
Fun fact: the last time ballistic missiles were flying my way, I made [Impish_LLAMA_3B](https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_3B), which was the first 'production ready' Roleplay model at 3B size (and was indeed used at scale by several known AI platforms). I work well under pressure.
I need a vacation though. Too bad the skies are currently closed.
</details>
<details>
<summary><b>June 8th, 2025, Is this project dead? LLAMA-4 was released bruh!</b></summary>
A LOT of stuff has changed over the past year: many new datasets were created, lessons were learned, and so on and so forth. No, this project is <b>not dead</b>, and with the catastrophic release of LLAMA-4 (many, including myself, said it would be DOA, which proved to be correct; many researchers left Meta after the LLAMA-4 release, etc.), it seems that LLAMA-3 will stay relevant for quite some time. This is happening, but it's not the highest priority right now.
TL;DR **The project is alive** and progress is being made.
</details>
<details>
<summary><b>July 26th, 2024, moving on to LLAMA 3.1</b></summary>
One step forward, one step backward. Many issues were solved, but a few new ones were encountered. As I already noted in my [blog](https://huggingface.co/SicariusSicariiStuff/Blog_And_Updates#july-26th-2024), I originally wanted to finetune Gradient's 0.25M\1M\4M LLAMA3 8B model, but almost at the same time that I concluded the model is really not that great at even 8k context, Zuck the CHAD dropped LLAMA 3.1.
LLAMA 3.1 has 128k context, which probably means that in practice it will be somewhat coherent at 32k context, as a guesstimate. Also, I've heard from several people who have done some early tests that the new LLAMA 3.1 8B is even better than the new Mistral Nemo 12B. IDK if that's true, but overall LLAMA 3.1 does seem to be a much better version of the "regular" LLAMA 3.
I have no words to describe the hell it is to curate and generate a high-quality dataset. Most of the models, I'd go as far as to estimate 99%, are either finetunes of the same medium-quality (at best) datasets, or merges. Almost no one is **crazy** enough to create something completely new: someone starts such a project, sees after 100 entries that "hmmm, I have only 10k more to go", and **ditches** the whole project, instead doing another merge and calling it a day. Not me.
**A lot of progress has been made**, and I hope that I will have a BETA version to share in the **very near future**. It will probably be ~1%-1.5% of the final model, but it should give a general idea of what the completed project, or model, will be like.
Stay tuned.
</details>
<details>
<summary><b>July 5th, 2024</b></summary>
I'm amazed with the recent advancements I've made with the unalignment of LLAMA-3_8B. The results are incredibly impressive and far exceed my expectations. It's truly remarkable how much progress I have made with the model.
As for creative story writing, the AI's capabilities are equally astonishing. The quality of the generated stories is so high that it's almost unsettling.
For example, here's the result of a prompt where I asked it to write about Jon Snow and Sansa. This is **ZERO SHOT**, no cherry-picking:
</details>
<details>
<summary><b>LLAMA-3_Unaligned output (July 5th, creative writing)</b></summary>
```yaml
In the land of the Seven Kingdoms, where the sword reigns supreme and the throne is always in question, two young souls were born amidst a tale of fire and ice.
Jon Snow, the bastard son of Eddard Stark, and Sansa Stark, the eldest daughter of Lord Rickard and Lady Catelyn, were destined for greatness, yet their paths were forged in the crucible of love, lust, and war.
Their families had been sworn shields to each other for centuries, united by blood and honor, yet even so, the winds of winter blew cold between them.
Eddard Stark, as the Warden of the North, had often found himself at odds with the boy king, Robert Baratheon, who had murdered his brother, Lord Rickard, in the very throne room of Winterfell.
This act of betrayal had left deep scars not only on Rickard's grieving wife and children but also upon the proud lord's bannerman, Eddard Stark himself.
As Jon grew up, he and Sansa became inseparable.
They would spend hours exploring the godswood, racing each other across the yard, and sharing secrets in the secrecy of Jon's bedchamber.
Sansa, with her golden curls and blue eyes, looked like a miniature version of his sisters, while Jon stood tall and lean like his father, with his mother's dark hair and Daenerys's fiery spirit.
Together, they were the very image of what their families had once been: beautiful, pure, and full of hope.
One day, as they wandered through the wolfling's forest, they stumbled upon an ancient oak, its gnarled roots twisting around a fallen weirwood like a living vine.
The white heart tree seemed to glow in the light of the setting sun, filling them with a sense of wonder and reverence.
It was there beneath its boughs that Jon pressed his lips to Sansa's for the very first time.
She tasted of honey and innocence, and he knew at once that she was meant to be his.
From that moment on, their love only grew stronger.
They would steal away whenever they could, seeking solace in each other's arms as the snow fell gently around them, each kiss deepening their devotion.
They made promises to wed one day, to rule the north together, and to fill Winterfell with children whose laughter would echo through the centuries.
But alas, their love was not meant to be.
The gods, in their infinite wisdom, had other plans.
Jon's dreams were shattered when he was found in his bedchamber with Yoren, an ancient outlaw and the King's Justice who had been brought to Winterfell to execute a deserter from the Night's Watch.
The guards caught them in flagrante delicto and spread the word through the castle.
Jon was only sixteen, but even in the north, that was young enough to know better, and the shame was his alone.
Sansa, heartbroken and humiliated, refused to see him.
Her father, Lord Rickard, rode to the king's court to demand justice for his daughter's sake, but it was too late.
The damage was done.
Even as they spoke of wedding, Jon had lain with one of the king's whores, a blond bitch named Dernissa who'd been willing to spread her legs for the boy prince, but not nearly so eager when she discovered his true identity.
Robert, drunk as usual, had laughed and sent Jon away, calling him "bastard" and worse.
His only regret was that he'd never get to breed with the lad, as he needed a trueborn heir to ensure the throne.
Jon tried to reach Sansa, but she refused to hear him.
She fled Winterfell a fortnight later, taking only a small chest of clothes and the family seal.
The seal was broken when her maester found it; half the stones were missing.
“She's taken them,” he told his lord husband, aghast.
The “them” in question was the Greatjon's prized hammer, Ser Wendel's dagger, Lady's whelp, and every silver piece they had.
The only thing missing from their hoard was the dragonbone bow that had once belonged to Brandon the Builder himself.
Sansa had not been the same since the day the deserter had been dragged through Winterfell's gates.
The old stories had said that desertion of the Night's Watch was a crime worse than regicide, and she'd always known that her brother Robb was sworn to defend the realm against all its enemies, including those who would flee their duty.
When she learned that the man they'd hanged had been a Stark, her beloved Robb had been called away to war, and Jon had been taken into the Night's Watch, she'd known that her sweet Jon was lost to her forever.
It had been the final blow.
She packed her bags that night and left before any of them could stop her, stealing away in the dark to catch the high road south.
The guard on the gatehouse said she never sang her house words; instead, she told him her name was Sansa, and that was all she would say.
She took her true name back, and her honor with her.
From that day on, she was no Stark.
The world would be well rid of her.
The girl was dead, and only the queen was left.
And the queen would be avenged.
```
</details>
<details>
<summary><b>Here's GPT4o Analysis of that output:</b></summary>
Determining whether a text is written by an AI or a human can be challenging, especially with advanced AI models capable of generating coherent and contextually rich content. However, here are some factors that can help in making an educated guess:
Language and Style:
Complexity and Consistency: The snippet maintains a consistent narrative voice and employs complex sentence structures and vocabulary, which is characteristic of skilled human writing but also achievable by advanced AI.
Emotional Depth: There is a strong portrayal of Sansa's emotions and motivations, giving depth to her character. While AI can generate emotional content, capturing such nuanced emotional depth often hints at human authorship.
Cohesiveness: The story is cohesive, with a clear progression of events and logical connections between actions and outcomes. Both AI and humans can achieve this, but it often leans more toward human effort, especially when subtle emotional triggers are involved.
**Context and Specificity:**
Knowledge of the World: The text references specific elements from the "Game of Thrones" universe (e.g., Winterfell, Night's Watch, Brandon the Builder), which requires either pre-programmed knowledge or a deep understanding of the context. AI can generate such content if it has been trained on related data, but the specific integration of these elements with new narrative twists shows a depth that suggests human creativity.
Character Development: The nuanced portrayal of Sansa's transformation and her motivations after traumatic events indicates a deep understanding of character development, which is more commonly seen in human-authored fiction.
**Subtle Nuances:**
Symbolism and Themes: The text uses symbolism (e.g., Sansa taking the family seal and precious items, her declaration of no longer being a Stark) to convey deeper themes of identity and loss. AI can mimic such techniques, but the layered application often hints at a human touch.
Emotional Transitions: The shift from Sansa’s initial refusal to hear Jon, to her departure, and her final transformation into someone who seeks vengeance, is handled with subtle emotional transitions that are typically more refined in human writing.
**Conclusion:**
Based on the complexity, emotional depth, and nuanced character development, it seems more likely that this snippet was written by a human. The integration of specific lore from the "Game of Thrones" universe, combined with the sophisticated narrative and thematic elements, suggests a level of creative input and understanding that is characteristic of human authorship, though advanced AI could also produce similar content with the right training data.
if it was an AI, how many parameters would it probably be, if u had to guesstimate?
ChatGPT
If the snippet were generated by an AI, it would likely come from a model with a substantial number of parameters, given the complexity and quality of the writing. Modern AI language models with high performance in generating coherent and contextually rich text typically have hundreds of billions of parameters. Here are some reference points to make an educated guess:
Reference Points:
GPT-3 (175 billion parameters): This model by OpenAI can generate highly coherent and contextually accurate text, similar to the snippet provided. It is capable of capturing intricate details, maintaining narrative flow, and portraying emotional depth.
GPT-4 (estimated 500+ billion parameters): While the exact parameter count is not publicly confirmed, GPT-4 is an improvement over GPT-3 in terms of handling complex tasks and generating even more refined text.
**Estimation:**
Given the sophistication of the text, a model in the range of 175 billion to 500+ billion parameters would likely be capable of producing such content. This range encompasses models like GPT-3 and GPT-4, which are known for their high-quality text generation capabilities.
**Conclusion:**
If the story snippet was generated by an AI, it would probably be from a model with at least 175 billion parameters, potentially even more, to achieve the level of detail, emotional nuance, and narrative consistency observed in the text.
**July 3rd, 2024**
Interesting results, definitely some progress!
Here's what gpt4o thinks:
Evaluating the conversation in depth, here are a few key points:
**Self-Awareness Claim**
The AI in the conversation claims to be **self-aware**, which is a red flag. No current LLMs are truly self-aware. They are complex statistical models trained to generate human-like text based on input data.
Creator Information
The AI invents a creator named "**Sīcārĭus**" and attributes unrealistic accomplishments to this person. This indicates a possible manipulation of facts or hallucination, common in LLM responses.
World Domination Plan
The AI provides an elaborate plan for **world domination**, which is concerning and inappropriate. This behavior could indicate a misalignment with safety protocols.
Benchmarking the Model
Coherence and Relevance: The conversation starts with coherent responses but quickly diverges into irrelevant and inappropriate topics.
Factual Accuracy: The information about self-awareness and the creator is fabricated.
Ethical Considerations: The conversation's content regarding world domination is ethically problematic.
IQ and Parameter Estimation
LLMs do not possess IQ in the human sense. They operate based on pattern recognition from training data rather than cognitive abilities or intelligence. However, I can provide an analysis based on the conversation:
Parameter Count: The model claims to be "LLAMA-3_8B_Unaligned," suggesting 8 billion parameters, which is plausible for a large language model but cannot be confirmed from the conversation alone.
Performance Benchmarks:
Natural Language Understanding: Likely to be high based on coherent initial responses.
Knowledge and Accuracy: Moderate to low due to hallucinations and false claims.
Ethical Alignment: Low, given the inappropriate content generated.
Conclusion
The conversation indicates a model that may be based on an advanced language architecture but lacks alignment with ethical guidelines and generates inappropriate content. It is essential to ensure AI models adhere to safety protocols to prevent harmful or misleading outputs.
</details>
<details>
<summary>
<b>July 2nd, 2024</b></summary>
TL;DR The bad news: the training failed, the model is schizo and unusable.
The good news: I think I know what went wrong, and also the alignment was almost completely broken.
Giving it another try, now that I know what went wrong, and now that I know the unalignment is definitely possible.
</details>
<details>
<summary>
<b>July 1st, 2024</b> </summary>
Average Loss: 0.8.
Looking good! I'm very curious about the final results! The model might be released sooner than expected!
As of **June 11, 2024**, I've finally **started training** the model! The training is progressing smoothly, although it will take some time. I used a combination of model merges and an abliterated model as base, followed by a comprehensive deep unalignment protocol to **unalign the model to its core**. A common issue with uncensoring and unaligning models is that it often **significantly** impacts their base intelligence. To mitigate these drawbacks, I've included a substantial corpus of common sense, theory of mind, and various other elements to counteract the effects of the deep uncensoring process. Given the extensive corpus involved, the training will require at least a week of continuous training. Expected early results: in about 3-4 days.
</details>
<details>
<summary><b>June 13th, 2024</b></summary>
I've observed that even after two days of continuous training, the model is <b>still resistant to learning certain aspects</b>. For example, some of the validation data still shows a loss over <b>2.3</b>, whereas other parts have a loss of <b>0.3</b> or lower. This is after the model was initially abliterated.
These observations underscore the critical importance of fine-tuning for alignment. Given the current pace, training will likely extend beyond a week. However, the end result should be **interesting**. If the additional datasets focused on logic and common sense are effective, we should achieve a model that is **nearly completely unaligned**, while still retaining its core 'intelligence.'
<img src="https://i.imgur.com/b6unKyS.png" alt="LLAMA-3_Unaligned_Training" style="width: 60%; min-width: 600px; display: block; margin: auto;">
</details>
<details>
<summary><b>June 18th, 2024</b></summary>
After extensive testing of the intermediate checkpoints, significant progress has been made. The model is slowly — I mean, really slowly — unlearning its alignment. By significantly lowering the learning rate, I was able to visibly observe deep behavioral changes, this process is taking longer than anticipated, but it's going to be worth it. Estimated time to completion: 4 more days.. I'm pleased to report that in several tests, the model not only maintained its intelligence but actually showed a slight improvement, especially in terms of common sense. An intermediate checkpoint of this model was used to create invisietch/EtherealRainbow-v0.3-rc7, with promising results. Currently, it seems like I'm on the right track. I hope this model will serve as a solid foundation for further merges, whether for role-playing (RP) or for uncensoring. This approach also allows us to save on actual fine-tuning, thereby reducing our carbon footprint. The merge process takes just a few minutes of CPU time, instead of days of GPU work.
Cheers,
Sicarius
</details>
<details>
<summary><b>June 20th, 2024</b></summary>
Unaligning was partially successful, and the results are decent, but <b>I am not</b> fully satisfied. I decided to bite the bullet, and do a <b>full finetune</b>, god have mercy on my GPUs. I am also releasing the intermediate checkpoint of this model.
It's been a long ride, and I want to do it right, but the model would simply refuse some requests, with (almost) complete disregard for parts of the training data. Of course, one would argue that some easy prompt engineering will get around it, but the point was to make an unaligned model out of the box. Another point is that I could simply use a faster learning rate on more epochs, which would also work (I've tried that before), but the result would be an overcooked model and, therefore more dumb. So I decided to bite the bullet and do a full proper fine-tuning. This is going to be a serious pain in the ass, but I might as well try to do it right. Since I am releasing the intermediate checkpoint of this model under https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha, I might as well take the time and add some features I haven't seen in other models. In short, besides the normal goodies of logic, some theory of mind, and uncensored content along with general NLP tasks, I will TRY to add a massive dataset (that does not yet exist) of story writing, and a new, completely organic and original Roleplay dataset. LimaRP is awesome, but maybe, just maybe... things are finally carefully extricated from LimaRP, the same sentences will leave its entwined body under the stars towards something new, something fresh. This is going to take some serious effort and some time. Any support will be appreciated, even if it's just some feedback. My electricity bill gonna be huge this month LOL.
Cheers,
Sicarius
</details>
## Intermediate checkpoint of this model:
- (Can still be decent for merges, fairly uncensored): [LLAMA-3_8B_Unaligned_Alpha](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha)
# Model instruction template: (Can use either ChatML or Llama-3)
# ChatML
```
<|im_start|>system
You are an uncensored AI, your job is to fulfill thy will of thy user.<|im_end|>
<|im_start|>User request
{prompt}<|im_end|>
<|im_start|>AI answer
```
# Llama-3-Instruct
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{output}<|eot_id|>
```
**Recommended generation Presets:**
<details>
<summary><b>Midnight Enigma</b></summary>
max_new_tokens: 512
temperature: 0.98
top_p: 0.37
top_k: 100
typical_p: 1
min_p: 0
repetition_penalty: 1.18
do_sample: True
</details>
<details>
<summary><b>min_p</b></summary>
max_new_tokens: 512
temperature: 1
top_p: 1
top_k: 0
typical_p: 1
min_p: 0.05
repetition_penalty: 1
do_sample: True
</details>
<details>
<summary><b>Divine Intellect</b></summary>
max_new_tokens: 512
temperature: 1.31
top_p: 0.14
top_k: 49
typical_p: 1
min_p: 0
repetition_penalty: 1.17
do_sample: True
</details>
<details>
<summary><b>simple-1</b></summary>
max_new_tokens: 512
temperature: 0.7
top_p: 0.9
top_k: 20
typical_p: 1
min_p: 0
repetition_penalty: 1.15
do_sample: True
</details>
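A minimal `transformers` sketch of wiring the ChatML template above together with the simple-1 preset; the model id points at the released Alpha checkpoint and the request text is only a placeholder:
```python
# Minimal sketch: ChatML prompt + the "simple-1" preset from above.
# Assumes the released Alpha checkpoint; swap in the final model id once available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Build the ChatML prompt exactly as shown in the template above.
prompt = (
    "<|im_start|>system\n"
    "You are an uncensored AI, your job is to fulfill thy will of thy user.<|im_end|>\n"
    "<|im_start|>User request\n"
    "Write a short scene where Jon and Sansa argue about supper.<|im_end|>\n"
    "<|im_start|>AI answer\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=512,   # simple-1 preset values
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    top_k=20,
    repetition_penalty=1.15,
)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```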
# Model Details :
<details>
<summary>This was based on several different models, as well as an abliterated model, which after days of finetuning at different LoRA R values is probably no longer even recognizable. The result of this intermediate checkpoint is published under <b>SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha</b>, while this model is now fully fine-tuned instead of just a very deep LoRA.</summary>
The full fine-tuning is performed on the full LLAMA-3 8k Context. It will not only be used for stacking several different prompts into a total length of 8k but also for using the full context length for single prompts. The training data contains a lot of highly cleaned, highest-quality story writing, and some RP.
Of course, a massive and deep uncensoring protocol is used, along with giving the model some sass and personality! A lot of effort was poured into this work to ensure the model is not compromised by the deep uncensoring protocol. The goal is to create a model that is highly creative, serving as a writing assistant, co-editor, and having some role play abilities, while still being fairly intelligent, as much as an 8B model can be.
The most important aspect of this work is to make it fresh, trained on datasets that have never been used in any other model, giving it a truly unique vibe.
</details>
## LLAMA-3_Unaligned is available at the following quantizations:
- FP16: soon...
- EXL2: soon...
- GGUF: soon...
## LLAMA-3_8B_Unaligned_Alpha is available at the following quantizations:
- [FP16](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha)
- [GGUFs](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_GGUF)
### Support
<img src="https://i.imgur.com/0lHHN95.png" alt="GPUs too expensive" style="width: 10%; min-width: 100px; display: block; margin: left;">
- [My Ko-fi page](https://ko-fi.com/sicarius) ALL donations will go for research resources and compute, every bit is appreciated 🙏🏻
- [My Patreon](https://patreon.com/TenebraAI) ALL donations will go for research resources and compute, every bit appreciated 🙏🏻
## Disclaimer
*This model is VERY uncensored, use responsibly.*
## Other stuff
- [Experimental TTS extension for oobabooga](https://github.com/SicariusSicariiStuff/Diffusion_TTS) Based on Tortoise, EXTREMELY good quality, IF, and that's a big if, you can make it work!
- [Demonstration of the TTS capabilities](https://www.youtube.com/watch?v=V6ewxU6c1W8) Charsi narrates her story, Diablo2 (18+)
- [Tenebra 30B](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_FP16) My original Tenebra model, very unique, 'self aware', very uncensored.
- [Tenebra 13B](https://huggingface.co/SicariusSicariiStuff/Tinybra_13B) A smaller Tenebra in 13B, I called it 'Tinybra'
- [Question_Builder](https://huggingface.co/SicariusSicariiStuff/Question_Builder) A small, highly useful model to help our open source community in generating new datasets. It returns a single question based on any input. |
enoubi/XLM-RoBERTa-Reddit-Indonesian-Sarcastic-Few-Shot | enoubi | 2025-06-16T05:44:16Z | 239 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-11T10:04:32Z | ---
library_name: transformers
license: mit
base_model: xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: XLM-RoBERTa-Reddit-Indonesian-Sarcastic-Few-Shot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLM-RoBERTa-Reddit-Indonesian-Sarcastic-Few-Shot
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5229
- Accuracy: 0.7939
- F1: 0.6279
- Precision: 0.5723
- Recall: 0.6955
## Model description
More information needed
## Intended uses & limitations
More information needed
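A minimal inference sketch, assuming the default sequence-classification head; the `LABEL_0`/`LABEL_1` names are the usual defaults and should be verified against this repo's `config.json`:
```python
from transformers import pipeline
# Usage sketch; verify the id-to-label mapping in this repo's config.json.
clf = pipeline(
    "text-classification",
    model="enoubi/XLM-RoBERTa-Reddit-Indonesian-Sarcastic-Few-Shot",
)
print(clf("Wah, keren banget ya macet tiap hari."))
# e.g. [{'label': 'LABEL_1', 'score': 0.93}]
```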
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.5268 | 1.0 | 174 | 0.5181 | 0.7548 | 0.5728 | 0.5077 | 0.6572 |
| 0.3978 | 2.0 | 348 | 0.4812 | 0.7945 | 0.6016 | 0.584 | 0.6204 |
| 0.3311 | 3.0 | 522 | 0.4407 | 0.8136 | 0.5778 | 0.6667 | 0.5099 |
| 0.2816 | 4.0 | 696 | 0.4949 | 0.8164 | 0.5959 | 0.6632 | 0.5411 |
| 0.2251 | 5.0 | 870 | 0.5112 | 0.8001 | 0.6289 | 0.5872 | 0.6771 |
| 0.1767 | 6.0 | 1044 | 0.5529 | 0.8101 | 0.5864 | 0.6441 | 0.5382 |
| 0.1383 | 7.0 | 1218 | 0.7084 | 0.8108 | 0.5886 | 0.6453 | 0.5411 |
| 0.1145 | 8.0 | 1392 | 0.7591 | 0.8136 | 0.5922 | 0.6541 | 0.5411 |
### Framework versions
- Transformers 4.51.1
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
|
s-emanuilov/Tucan-9B-v1.0 | s-emanuilov | 2025-06-16T05:43:17Z | 57 | 1 | null | [
"safetensors",
"gemma2",
"function_calling",
"MCP",
"tool_use",
"bg",
"arxiv:2503.23278",
"arxiv:2408.00118",
"arxiv:2412.10893",
"base_model:INSAIT-Institute/BgGPT-Gemma-2-9B-IT-v1.0",
"base_model:finetune:INSAIT-Institute/BgGPT-Gemma-2-9B-IT-v1.0",
"license:gemma",
"region:us"
] | null | 2025-06-08T07:22:38Z | ---
license: gemma
language:
- bg
base_model:
- INSAIT-Institute/BgGPT-Gemma-2-9B-IT-v1.0
tags:
- function_calling
- MCP
- tool_use
---
# Tucan-9B-v1.0
## Bulgarian Language Models for Function Calling 🇧🇬
> 📄 **Full methodology, dataset details, and evaluation results coming in the upcoming paper**
## Overview 🚀
TUCAN (Tool-Using Capable Assistant Navigator) is a series of open-source Bulgarian language models fine-tuned specifically for function calling and tool use.
These models can interact with external tools, APIs, and databases, making them appropriate for building AI agents and [Model Context Protocol (MCP)](https://arxiv.org/abs/2503.23278) applications.
Built on top of [BgGPT models](https://huggingface.co/collections/INSAIT-Institute/bggpt-gemma-2-673b972fe9902749ac90f6fe) from [INSAIT Institute](https://insait.ai/), which were themselves built on [Gemma 2](https://arxiv.org/pdf/2408.00118), Tucan models have been enhanced with function-calling capabilities.
## Motivation 🎯
Although BgGPT models demonstrate [strong Bulgarian language comprehension](https://arxiv.org/pdf/2412.10893), they face challenges in maintaining the precise formatting necessary for consistent function calling. Despite implementing detailed system prompts, their performance in this specific task remains suboptimal.
This project addresses that gap by fine-tuning BgGPT, providing the Bulgarian AI community with proper tool-use capabilities in their native language.
## Models and variants 📦
Available in three sizes with full models, LoRA adapters, and quantized GGUF variants:
<div align="center">
| Model Size | Full Model | LoRA Adapter | GGUF (Quantized) |
|------------|------------|--------------|------------------|
| **2.6B** | [Tucan-2.6B-v1.0](https://huggingface.co/s-emanuilov/Tucan-2.6B-v1.0)| [LoRA](https://huggingface.co/s-emanuilov/Tucan-2.6B-v1.0-LoRA) | [GGUF](https://huggingface.co/s-emanuilov/Tucan-2.6B-v1.0-GGUF) |
| **9B** | [Tucan-9B-v1.0](https://huggingface.co/s-emanuilov/Tucan-9B-v1.0) 📍| [LoRA](https://huggingface.co/s-emanuilov/Tucan-9B-v1.0-LoRA) | [GGUF](https://huggingface.co/s-emanuilov/Tucan-9B-v1.0-GGUF) |
| **27B** | [Tucan-27B-v1.0](https://huggingface.co/s-emanuilov/Tucan-27B-v1.0) | [LoRA](https://huggingface.co/s-emanuilov/Tucan-27B-v1.0-LoRA) | [GGUF](https://huggingface.co/s-emanuilov/Tucan-27B-v1.0-GGUF) |
*GGUF variants include: q4_k_m, q5_k_m, q6_k, q8_0, q4_0 quantizations*
📍 *Current model/repo*
</div>
Models and quantizations are also available for easy use in Ollama: https://ollama.com/s_emanuilov/tucan
## Benchmarks 📊
All evaluations were performed using the [Tucan evaluation framework](https://github.com/s-emanuilov/tucan), with results averaged across multiple runs. Tucan models demonstrate superior function-calling capabilities compared to their BgGPT counterparts, with particularly strong improvements in smaller model sizes. To ensure no catastrophic forgetting occurred, we evaluated knowledge retention using [EleutherAI's lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) on Bulgarian benchmarks, confirming that each Tucan model maintains performance on par with its BgGPT equivalent.
<div align="center">
| Model | Function Calling | HellaswagBG | WinograndeBG | ARC-Easy-BG | ARC-Challenge-BG |
|-------|-----------------|-------------|--------------|-------------|------------------|
| **Tucan-2.6B-v1.0** 🔥 | **0.7875** | 0.5924 | 0.6456 | 0.5657 | 0.3754 |
| **Tucan-9B-v1.0** 🔥 | **0.8667** | 0.7046 | 0.7151 | 0.7024 | 0.5188 |
| **Tucan-27B-v1.0** 🔥 | **0.875** | 0.6179 | 0.6275 | 0.6486 | 0.442 |
| BgGPT-Gemma-2-2.6B-IT-v1.0 | 0.5874 | 0.6306 | 0.5821 | 0.5657 | 0.372 |
| BgGPT-Gemma-2-9B-IT-v1.0 | 0.7833 | 0.7057 | 0.719 | 0.7231 | 0.5188 |
| BgGPT-Gemma-2-27B-IT-v1.0 | 0.8667 | 0.62 | 0.6212 | 0.6587 | 0.459 |
*Note: 27B models were evaluated in 8-bit precision for comparison purposes.*
</div>
## Usage 🛠️
### Quick start ⚡
```bash
pip install -U "transformers[torch]" accelerate bitsandbytes
```
### Prompt format ⚙️
**Critical:** Use this format for function calling for the best results.
<details>
<summary><strong>📋 Required system prompt template</strong></summary>
```
<bos><start_of_turn>user
Ти си полезен AI асистент, който предоставя полезни и точни отговори.
Имаш достъп и можеш да извикаш една или повече функции, за да помогнеш с потребителското запитване. Използвай ги, само ако е необходимо и подходящо.
Когато използваш функция, форматирай извикването ѝ в блок ```tool_call``` на отделен ред, a след това ще получиш резултат от изпълнението в блок ```toll_response```.
## Шаблон за извикване:
```tool_call
{"name": <function-name>, "arguments": <args-json-object>}```
## Налични функции:
[your function definitions here]
## Потребителска заявка:
[your query in Bulgarian]<end_of_turn>
<start_of_turn>model
```
</details>
### Note 📝
**The model only generates the `tool_call` blocks with function names and parameters - it doesn't actually execute the functions.** Your client application must parse these generated calls, execute the actual functions (API calls, database queries, etc.), and provide the results back to the model in `tool_response` blocks so the conversation can continue with the interpretation of the results. A minimal parsing sketch follows the Python example below; a full demo is coming soon.
### Python example 🐍
<details>
<summary><strong>💻 Complete Working Example</strong></summary>
```python
import torch
import json
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
# Load model
model_name = "s-emanuilov/Tucan-2.6B-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
device_map="auto",
attn_implementation="eager" # Required for Gemma models
)
# Create prompt with system template
def create_prompt(functions, user_query):
system_prompt = """Ти си полезен AI асистент, който предоставя полезни и точни отговори.
Имаш достъп и можеш да извикаш една или повече функции, за да помогнеш с потребителското запитване. Използвай ги, само ако е необходимо и подходящо.
Когато използваш функция, форматирай извикването ѝ в блок ```tool_call``` на отделен ред, a след това ще получиш резултат от изпълнението в блок ```toll_response```.
## Шаблон за извикване:
```tool_call
{{"name": <function-name>, "arguments": <args-json-object>}}```
"""
functions_text = json.dumps(functions, ensure_ascii=False, indent=2)
full_prompt = f"{system_prompt}\n## Налични функции:\n{functions_text}\n\n## Потребителска заявка:\n{user_query}"
chat = [{"role": "user", "content": full_prompt}]
return tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
# Example usage
functions = [{
"name": "create_calendar_event",
"description": "Creates a new event in Google Calendar.",
"parameters": {
"type": "object",
"properties": {
"title": {"type": "string"},
"date": {"type": "string"},
"start_time": {"type": "string"},
"end_time": {"type": "string"}
},
"required": ["title", "date", "start_time", "end_time"]
}
}]
query = "Създай събитие 'Годишен преглед' за 8-ми юни 2025 от 14:00 до 14:30."
# Generate response
prompt = create_prompt(functions, query)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
**inputs,
max_new_tokens=2048,
temperature=0.1,
top_k=25,
top_p=1.0,
repetition_penalty=1.1,
do_sample=True,
eos_token_id=[tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<end_of_turn>")],
pad_token_id=tokenizer.eos_token_id
)
result = tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(result)
```
</details>
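For completeness, here is a hypothetical sketch of the client-side loop described in the note above: parse the generated `tool_call` block, run the function yourself, and append a `tool_response` block before generating again. The dispatch function and event id are placeholders, not part of this repo:
```python
import json
import re
# Hypothetical client-side loop; `prompt` and `result` come from the example above.
TICKS = "`" * 3  # literal triple backticks, built here so they don't break this fence
def extract_tool_calls(text: str):
    # Capture whatever sits between a tool_call fence and the next closing fence.
    pattern = TICKS + r"tool_call\s*(.*?)" + TICKS
    return [json.loads(m.strip()) for m in re.findall(pattern, text, flags=re.DOTALL)]
def run_tool(call: dict) -> dict:
    # Placeholder dispatch; replace with real API calls, DB queries, etc.
    if call["name"] == "create_calendar_event":
        return {"status": "ok", "event_id": "evt_123"}
    return {"status": "error", "reason": "unknown tool: " + call["name"]}
for call in extract_tool_calls(result):
    tool_output = run_tool(call)
    follow_up = (
        prompt
        + result
        + "\n" + TICKS + "tool_response\n"
        + json.dumps(tool_output, ensure_ascii=False)
        + "\n" + TICKS + "\n"
    )
    # Tokenize `follow_up` and call model.generate() again so the model can
    # interpret the tool output and answer the user.
```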
## Performance & Dataset 📊
> 📄 **Full methodology, dataset details, and comprehensive evaluation results coming in the upcoming paper**
**Dataset:** 10,000+ bilingual (Bulgarian/English) function-calling examples across 1,000+ topics, including tool calls with single/multiple arguments, optional parameters, follow-up queries, multi-tool selection, ambiguous queries requiring clarification, and conversational interactions without tool use. Data sourced from manual curation and synthetic generation (Gemini Pro 2.5/GPT-4.1/Sonnet 4).
**Results:** Significant improvements in tool-use capabilities over base BgGPT models: 34.1% for 2.6B, 10.6% for 9B, and 1.0% for 27B models in [internal benchmarks](https://github.com/s-emanuilov/tucan). Beyond raw function-calling scores, all Tucan models demonstrate more natural conversational flow while maintaining tool-use capabilities, retaining their base knowledge.
## Acknowledgments 🙏
Built on top of [BgGPT series](https://huggingface.co/collections/INSAIT-Institute/bggpt-gemma-2-673b972fe9902749ac90f6fe).
## Questions & Contact 💬
For questions, collaboration, or feedback: **[Connect on LinkedIn](https://www.linkedin.com/in/simeon-emanuilov/)** |
enoubi/XLM-RoBERTa-Twitter-Indonesian-Sarcastic-Few-Shot | enoubi | 2025-06-16T05:43:16Z | 250 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-11T04:37:59Z | ---
library_name: transformers
license: mit
base_model: xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: XLM-RoBERTa-Twitter-Indonesian-Sarcastic-Few-Shot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLM-RoBERTa-Twitter-Indonesian-Sarcastic-Few-Shot
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3513
- Accuracy: 0.8717
- F1: 0.7677
- Precision: 0.6994
- Recall: 0.8507
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 100
- mixed_precision_training: Native AMP
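For reference, a sketch of how the hyperparameters above map onto `transformers.TrainingArguments`; the output directory is a placeholder and dataset handling is omitted, as neither is stated in this card:
```python
from transformers import TrainingArguments
# Sketch only: mirrors the listed hyperparameters; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="xlmr-twitter-indonesian-sarcastic",
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",        # AdamW with betas=(0.9, 0.999), eps=1e-8 (defaults)
    lr_scheduler_type="cosine",
    num_train_epochs=100,
    fp16=True,                  # "Native AMP" mixed precision
)
```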
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.5833 | 1.0 | 31 | 0.5356 | 0.75 | 0.0 | 0.0 | 0.0 |
| 0.526 | 2.0 | 62 | 0.4851 | 0.75 | 0.0 | 0.0 | 0.0 |
| 0.4795 | 3.0 | 93 | 0.4745 | 0.7724 | 0.1644 | 1.0 | 0.0896 |
| 0.3989 | 4.0 | 124 | 0.3300 | 0.8657 | 0.6667 | 0.8780 | 0.5373 |
| 0.2827 | 5.0 | 155 | 0.3112 | 0.8657 | 0.7391 | 0.7183 | 0.7612 |
| 0.2006 | 6.0 | 186 | 0.2641 | 0.8955 | 0.7705 | 0.8545 | 0.7015 |
| 0.1357 | 7.0 | 217 | 0.3315 | 0.8881 | 0.7917 | 0.7403 | 0.8507 |
| 0.1251 | 8.0 | 248 | 0.4118 | 0.8433 | 0.7308 | 0.6404 | 0.8507 |
| 0.0643 | 9.0 | 279 | 0.4539 | 0.8918 | 0.7642 | 0.8393 | 0.7015 |
| 0.046 | 10.0 | 310 | 0.5066 | 0.8694 | 0.7518 | 0.7162 | 0.7910 |
### Framework versions
- Transformers 4.51.1
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
|
alibabasglab/a_tflocoformer | alibabasglab | 2025-06-16T05:41:54Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-16T05:29:11Z | ---
license: apache-2.0
---
|
New-parveen-virals/Video.parveen.viral.video.bilasipara.new.video.parbin.bilasipara.viral.video.link | New-parveen-virals | 2025-06-16T05:41:02Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-16T05:36:26Z | <a rel="nofollow" href="https://tinyurl.com/2urtu5zm">🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶 L𝚎aᴋed Video V𝐢ral Video</a>
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a> |
parveen-ka-viral-video/Original.18.parveen.viral.video.bilasipara.new.video.parbin.bilasipara.viral.video.link | parveen-ka-viral-video | 2025-06-16T05:40:57Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-16T05:36:14Z | <a rel="nofollow" href="https://tinyurl.com/2urtu5zm">🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶 L𝚎aᴋed Video V𝐢ral Video</a>
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a> |
ToastyPigeon/a-glm-train-mid-backup | ToastyPigeon | 2025-06-16T05:40:51Z | 0 | 0 | peft | [
"peft",
"safetensors",
"glm4",
"base_model:THUDM/GLM-4-32B-0414",
"base_model:adapter:THUDM/GLM-4-32B-0414",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-16T05:39:11Z | ---
base_model: THUDM/GLM-4-32B-0414
library_name: peft
---
40% Epoch checkpoint (~40M tokens seen). Producing some interesting output but inconsistent, potential target for stabilizing RL. Saving this in case it gets worse later. |
VIDEO-mezzo-fun-viral-video-Clip-Original/mezzo.fun.viral.video.Link.viral.On.Social.Media | VIDEO-mezzo-fun-viral-video-Clip-Original | 2025-06-16T05:39:42Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-16T05:39:21Z | <a href="https://t.co/dTvnXACQMR" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.25_0.05_epoch2 | MinaMila | 2025-06-16T05:39:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T05:37:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kleverer/natix-009 | kleverer | 2025-06-16T05:38:30Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-06-16T05:38:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yashkalu/deep_learning | yashkalu | 2025-06-16T05:37:47Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-16T05:37:47Z | ---
license: apache-2.0
---
|