| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
phospho-app/jmota27-ACT_BBOX-boats_datasets-qzu3c | phospho-app | 2025-06-16T07:18:01Z | 0 | 0 | null | [
"phosphobot",
"act",
"region:us"
] | null | 2025-06-16T07:14:22Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
The object 'black' was detected in 0 episodes in secondary_0 camera (should be: 10 episodes min). This is not enough to train a model. Check your dataset: https://lerobot-visualize-dataset.hf.space/jmota27/boats_datasets/ and rephrase the instruction.
```
## Training parameters:
- **Dataset**: [jmota27/boats_datasets](https://huggingface.co/datasets/jmota27/boats_datasets)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
Triangle104/Q3-8B-Kintsugi-Q5_K_S-GGUF | Triangle104 | 2025-06-16T07:17:04Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"axolotl",
"unsloth",
"roleplay",
"conversational",
"llama-cpp",
"gguf-my-repo",
"dataset:PygmalionAI/PIPPA",
"dataset:Alfitaria/nemotron-ultra-reasoning-synthkink",
"dataset:PocketDoc/Dans-Prosemaxx-Gutenberg",
"dataset:FreedomIntelligence/Medical-R1-Distill-Data",
"dataset:cognitivecomputations/SystemChat-2.0",
"dataset:allenai/tulu-3-sft-personas-instruction-following",
"dataset:kalomaze/Opus_Instruct_25k",
"dataset:simplescaling/s1K-claude-3-7-sonnet",
"dataset:ai2-adapt-dev/flan_v2_converted",
"dataset:grimulkan/theory-of-mind",
"dataset:grimulkan/physical-reasoning",
"dataset:nvidia/HelpSteer3",
"dataset:nbeerbower/gutenberg2-dpo",
"dataset:nbeerbower/gutenberg-moderne-dpo",
"dataset:nbeerbower/Purpura-DPO",
"dataset:antiven0m/physical-reasoning-dpo",
"dataset:allenai/tulu-3-IF-augmented-on-policy-70b",
"dataset:NobodyExistsOnTheInternet/system-message-DPO",
"base_model:allura-org/Q3-8B-Kintsugi",
"base_model:quantized:allura-org/Q3-8B-Kintsugi",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T07:15:07Z | ---
license: apache-2.0
base_model: allura-org/Q3-8B-Kintsugi
library_name: transformers
tags:
- mergekit
- axolotl
- unsloth
- roleplay
- conversational
- llama-cpp
- gguf-my-repo
datasets:
- PygmalionAI/PIPPA
- Alfitaria/nemotron-ultra-reasoning-synthkink
- PocketDoc/Dans-Prosemaxx-Gutenberg
- FreedomIntelligence/Medical-R1-Distill-Data
- cognitivecomputations/SystemChat-2.0
- allenai/tulu-3-sft-personas-instruction-following
- kalomaze/Opus_Instruct_25k
- simplescaling/s1K-claude-3-7-sonnet
- ai2-adapt-dev/flan_v2_converted
- grimulkan/theory-of-mind
- grimulkan/physical-reasoning
- nvidia/HelpSteer3
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
- nbeerbower/Purpura-DPO
- antiven0m/physical-reasoning-dpo
- allenai/tulu-3-IF-augmented-on-policy-70b
- NobodyExistsOnTheInternet/system-message-DPO
---
# Triangle104/Q3-8B-Kintsugi-Q5_K_S-GGUF
This model was converted to GGUF format from [`allura-org/Q3-8B-Kintsugi`](https://huggingface.co/allura-org/Q3-8B-Kintsugi) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/allura-org/Q3-8B-Kintsugi) for more details on the model.
---
Q3-8B-Kintsugi is a roleplaying model finetuned from Qwen3-8B-Base.
During testing, Kintsugi punched well above its weight class in terms of parameters, especially for 1-on-1 roleplaying and general storywriting.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Q3-8B-Kintsugi-Q5_K_S-GGUF --hf-file q3-8b-kintsugi-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Q3-8B-Kintsugi-Q5_K_S-GGUF --hf-file q3-8b-kintsugi-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Q3-8B-Kintsugi-Q5_K_S-GGUF --hf-file q3-8b-kintsugi-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Q3-8B-Kintsugi-Q5_K_S-GGUF --hf-file q3-8b-kintsugi-q5_k_s.gguf -c 2048
```
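As an alternative to the binaries above, here is a minimal Python sketch using the third-party `llama-cpp-python` bindings (assumptions: the package is installed and recent enough to provide `Llama.from_pretrained`, and the filename matches the examples above):
```python
from llama_cpp import Llama

# Downloads the quantized file from the Hub and loads it; the context size
# mirrors the server example above.
llm = Llama.from_pretrained(
    repo_id="Triangle104/Q3-8B-Kintsugi-Q5_K_S-GGUF",
    filename="q3-8b-kintsugi-q5_k_s.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```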
|
allura-org/Q3-8B-Kintsugi | allura-org | 2025-06-16T07:16:59Z | 8 | 3 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"mergekit",
"axolotl",
"unsloth",
"roleplay",
"conversational",
"dataset:PygmalionAI/PIPPA",
"dataset:Alfitaria/nemotron-ultra-reasoning-synthkink",
"dataset:PocketDoc/Dans-Prosemaxx-Gutenberg",
"dataset:FreedomIntelligence/Medical-R1-Distill-Data",
"dataset:cognitivecomputations/SystemChat-2.0",
"dataset:allenai/tulu-3-sft-personas-instruction-following",
"dataset:kalomaze/Opus_Instruct_25k",
"dataset:simplescaling/s1K-claude-3-7-sonnet",
"dataset:ai2-adapt-dev/flan_v2_converted",
"dataset:grimulkan/theory-of-mind",
"dataset:grimulkan/physical-reasoning",
"dataset:nvidia/HelpSteer3",
"dataset:nbeerbower/gutenberg2-dpo",
"dataset:nbeerbower/gutenberg-moderne-dpo",
"dataset:nbeerbower/Purpura-DPO",
"dataset:antiven0m/physical-reasoning-dpo",
"dataset:allenai/tulu-3-IF-augmented-on-policy-70b",
"dataset:NobodyExistsOnTheInternet/system-message-DPO",
"base_model:Qwen/Qwen3-8B-Base",
"base_model:finetune:Qwen/Qwen3-8B-Base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-13T21:25:09Z | ---
license: apache-2.0
base_model: Qwen/Qwen3-8B-Base
library_name: transformers
tags:
- mergekit
- axolotl
- unsloth
- roleplay
- conversational
datasets:
- PygmalionAI/PIPPA
- Alfitaria/nemotron-ultra-reasoning-synthkink
- PocketDoc/Dans-Prosemaxx-Gutenberg
- FreedomIntelligence/Medical-R1-Distill-Data
- cognitivecomputations/SystemChat-2.0
- allenai/tulu-3-sft-personas-instruction-following
- kalomaze/Opus_Instruct_25k
- simplescaling/s1K-claude-3-7-sonnet
- ai2-adapt-dev/flan_v2_converted
- grimulkan/theory-of-mind
- grimulkan/physical-reasoning
- nvidia/HelpSteer3
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
- nbeerbower/Purpura-DPO
- antiven0m/physical-reasoning-dpo
- allenai/tulu-3-IF-augmented-on-policy-70b
- NobodyExistsOnTheInternet/system-message-DPO
---
# Q3-8B-Kintsugi

<small><i>get it? because kintsugi sounds like kitsune? hahaha-</i></small>
# Overview
***Q3-8B-Kintsugi*** is a roleplaying model finetuned from [Qwen3-8B-Base](https://huggingface.co/Qwen/Qwen3-8B-Base).
During testing, Kintsugi punched well above its weight class in terms of parameters, especially for 1-on-1 roleplaying and general storywriting.
# Quantizations
EXL3:
- [Official EXL3 quant repo](https://huggingface.co/allura-quants/allura-org_Q3-8B-Kintsugi-EXL3)
GGUF:
- [Official static GGUF quants](https://huggingface.co/allura-quants/allura-org_Q3-8B-Kintsugi-GGUF)
MLX:
- [8, 6, and 4bpw MLX-format quants by soundTeam](https://huggingface.co/collections/allura-quants/q3-8b-kintsugi-mlx-684fc48444f1214749f538c4)
# Usage
- Format is plain old ChatML (note that, unlike regular Qwen 3, you do *not* need to prefill empty think tags to suppress reasoning -- see below).
- Settings used by testers varied, but we generally stayed around 0.9 temperature and 0.1 min-p; a minimal inference sketch follows this list. Do *not* use repetition penalties (DRY included); they break the model.
- Any system prompt can likely be used, but I used the Shingame system prompt (link will be added later, I promise).
- The official instruction-following version of Qwen3-8B was not used as a base. Instruction following was trained in post hoc, and "thinking" traces were not included. __As a result, "thinking" will not function.__
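For illustration, a minimal sketch of these settings with transformers (the prompt, sampler values, and generation length are assumptions based on the notes above, not an official recipe):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allura-org/Q3-8B-Kintsugi"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# ChatML is applied via the tokenizer's chat template.
messages = [
    {"role": "system", "content": "You are the narrator of an ongoing roleplay."},
    {"role": "user", "content": "The tavern door creaks open."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
# Sampler settings from the notes above: temperature 0.9, min-p 0.1,
# and no repetition penalties.
out = model.generate(inputs, max_new_tokens=200, do_sample=True,
                     temperature=0.9, min_p=0.1)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```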
# Training Process
1. The [base model](https://huggingface.co/Qwen/Qwen3-8B-Base) first went through a supervised finetune on a corpus of instruction following data, roleplay conversations, and human writing based on the [Ink](https://huggingface.co/collections/allura-org/ink-6772fd1442308781594bbabb)/[Bigger Body](https://huggingface.co/collections/allura-org/bigger-body-67b277af0861cec33b54745d)/[Remnant](https://huggingface.co/collections/allura-org/remnant-6817c2113bbb2aed501513d0) lineage.
2. Finally, a KTO reinforcement learning phase steered the model away from the very purple prose the initial merge had, and improved its logical+spatial reasoning and sense of overall "intelligence".
Both stages here are very similar to [Q3-30B-A3B-Designant](https://huggingface.co/allura-org/Q3-30B-A3B-Designant), which went through a very similar process with the same data.
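For orientation, a schematic sketch of such a KTO phase with TRL (the dataset, hyperparameters, and `processing_class` argument name are assumptions -- TRL's API has shifted across versions -- and this is not the authors' actual training script):
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import KTOConfig, KTOTrainer

base = "Qwen/Qwen3-8B-Base"  # base model named in this card
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Placeholder KTO dataset with "prompt"/"completion"/"label" columns.
dataset = load_dataset("trl-lib/kto-mix-14k", split="train")

args = KTOConfig(output_dir="kto-out", per_device_train_batch_size=1)
trainer = KTOTrainer(model=model, args=args, train_dataset=dataset,
                     processing_class=tokenizer)
trainer.train()
```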
# Credits
- Fizz - Training, Data Wrangling
- Toaster, Mango, Bot, probably others I forgot ;-; - Testing
- inflatebot - original Designant model card that this one was yoinked from
- Artus - Funding
- Alibaba - Making the original model
- Axolotl, Unsloth, Huggingface - Making the frameworks used to train this model (Axolotl was used for the SFT process, and Unsloth+TRL was used for the KTO process)
- All quanters, inside and outside the org, specifically Artus, Lyra, and soundTeam/Heni
We would like to thank the Allura community on Discord, especially Curse, Heni, Artus and Mawnipulator, for their companionship and moral support. You all mean the world to us <3 |
Wilbur1240/ppo-pyramid | Wilbur1240 | 2025-06-16T07:16:43Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2025-06-16T07:16:31Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Wilbur1240/ppo-pyramid
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Sumail/Eurus9 | Sumail | 2025-06-16T07:15:20Z | 24 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-08-22T09:24:47Z | ---
base_model:
- itorgov/model-1723976476
- itorgov/model-1723975614
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
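For reference, SLERP interpolates each pair of weight tensors along the great circle between them. With interpolation factor $t$ and angle $\Omega$ between the (flattened) tensors $w_1$ and $w_2$, the standard formula is:

$$\mathrm{slerp}(w_1, w_2; t) = \frac{\sin\!\big((1-t)\,\Omega\big)}{\sin \Omega}\, w_1 + \frac{\sin(t\,\Omega)}{\sin \Omega}\, w_2, \qquad \cos \Omega = \frac{w_1 \cdot w_2}{\lVert w_1 \rVert\, \lVert w_2 \rVert}$$

(mergekit's exact handling of near-parallel tensors may differ; for small $\Omega$ implementations typically fall back to linear interpolation).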
### Models Merged
The following models were included in the merge:
* [itorgov/model-1723976476](https://huggingface.co/itorgov/model-1723976476)
* [itorgov/model-1723975614](https://huggingface.co/itorgov/model-1723975614)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: itorgov/model-1723975614
layer_range: [0, 48]
- model: itorgov/model-1723976476
layer_range: [0, 48]
merge_method: slerp
base_model: itorgov/model-1723975614
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
Triangle104/Q3-8B-Kintsugi-Q4_K_S-GGUF | Triangle104 | 2025-06-16T07:14:57Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"axolotl",
"unsloth",
"roleplay",
"conversational",
"llama-cpp",
"gguf-my-repo",
"dataset:PygmalionAI/PIPPA",
"dataset:Alfitaria/nemotron-ultra-reasoning-synthkink",
"dataset:PocketDoc/Dans-Prosemaxx-Gutenberg",
"dataset:FreedomIntelligence/Medical-R1-Distill-Data",
"dataset:cognitivecomputations/SystemChat-2.0",
"dataset:allenai/tulu-3-sft-personas-instruction-following",
"dataset:kalomaze/Opus_Instruct_25k",
"dataset:simplescaling/s1K-claude-3-7-sonnet",
"dataset:ai2-adapt-dev/flan_v2_converted",
"dataset:grimulkan/theory-of-mind",
"dataset:grimulkan/physical-reasoning",
"dataset:nvidia/HelpSteer3",
"dataset:nbeerbower/gutenberg2-dpo",
"dataset:nbeerbower/gutenberg-moderne-dpo",
"dataset:nbeerbower/Purpura-DPO",
"dataset:antiven0m/physical-reasoning-dpo",
"dataset:allenai/tulu-3-IF-augmented-on-policy-70b",
"dataset:NobodyExistsOnTheInternet/system-message-DPO",
"base_model:allura-org/Q3-8B-Kintsugi",
"base_model:quantized:allura-org/Q3-8B-Kintsugi",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T06:39:48Z | ---
license: apache-2.0
base_model: allura-org/Q3-8B-Kintsugi
library_name: transformers
tags:
- mergekit
- axolotl
- unsloth
- roleplay
- conversational
- llama-cpp
- gguf-my-repo
datasets:
- PygmalionAI/PIPPA
- Alfitaria/nemotron-ultra-reasoning-synthkink
- PocketDoc/Dans-Prosemaxx-Gutenberg
- FreedomIntelligence/Medical-R1-Distill-Data
- cognitivecomputations/SystemChat-2.0
- allenai/tulu-3-sft-personas-instruction-following
- kalomaze/Opus_Instruct_25k
- simplescaling/s1K-claude-3-7-sonnet
- ai2-adapt-dev/flan_v2_converted
- grimulkan/theory-of-mind
- grimulkan/physical-reasoning
- nvidia/HelpSteer3
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
- nbeerbower/Purpura-DPO
- antiven0m/physical-reasoning-dpo
- allenai/tulu-3-IF-augmented-on-policy-70b
- NobodyExistsOnTheInternet/system-message-DPO
---
# Triangle104/Q3-8B-Kintsugi-Q4_K_S-GGUF
This model was converted to GGUF format from [`allura-org/Q3-8B-Kintsugi`](https://huggingface.co/allura-org/Q3-8B-Kintsugi) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/allura-org/Q3-8B-Kintsugi) for more details on the model.
---
Q3-8B-Kintsugi is a roleplaying model finetuned from Qwen3-8B-Base.
During testing, Kintsugi punched well above its weight class in terms of parameters, especially for 1-on-1 roleplaying and general storywriting.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Q3-8B-Kintsugi-Q4_K_S-GGUF --hf-file q3-8b-kintsugi-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Q3-8B-Kintsugi-Q4_K_S-GGUF --hf-file q3-8b-kintsugi-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Q3-8B-Kintsugi-Q4_K_S-GGUF --hf-file q3-8b-kintsugi-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Q3-8B-Kintsugi-Q4_K_S-GGUF --hf-file q3-8b-kintsugi-q4_k_s.gguf -c 2048
```
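If you only need the file locally (for example, to point another runtime at it), here is a minimal sketch with `huggingface_hub` (the filename is assumed to match the examples above):
```python
from huggingface_hub import hf_hub_download

# Downloads into the local Hugging Face cache and returns the file path.
path = hf_hub_download(
    repo_id="Triangle104/Q3-8B-Kintsugi-Q4_K_S-GGUF",
    filename="q3-8b-kintsugi-q4_k_s.gguf",
)
print(path)
```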
|
phospho-app/jmota27-ACT_BBOX-boats_datasets-u50na | phospho-app | 2025-06-16T07:10:27Z | 0 | 0 | null | [
"phosphobot",
"act",
"region:us"
] | null | 2025-06-16T07:05:08Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
The object 'black boat' was detected in 0 episodes in secondary_0 camera (should be: 10 episodes min). This is not enough to train a model. Check your dataset: https://lerobot-visualize-dataset.hf.space/jmota27/boats_datasets/ and rephrase the instruction.
```
## Training parameters:
- **Dataset**: [jmota27/boats_datasets](https://huggingface.co/datasets/jmota27/boats_datasets)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.5_0.15_0.75_epoch2 | MinaMila | 2025-06-16T07:08:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T07:06:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
NICFRU/nc_ner_bert_model_german_alle_ner_tags | NICFRU | 2025-06-16T07:08:17Z | 0 | 0 | null | [
"safetensors",
"bert",
"region:us"
] | null | 2025-06-16T07:05:43Z | # nc_ner_bert_model
This model is a fine-tuned version of bert-base-german-cased on the german-ler dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0113
- F1: 0.9723
- Precision: 0.9669
- Recall: 0.9778
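A minimal inference sketch with the transformers `pipeline` API (the repo id is taken from this listing, the example sentence is made up, and `aggregation_strategy="simple"` merges subword tokens into whole entity spans):
```python
from transformers import pipeline

ner = pipeline(
    "ner",
    model="NICFRU/nc_ner_bert_model_german_alle_ner_tags",
    aggregation_strategy="simple",
)
print(ner("Das Bundesverfassungsgericht entschied am 3. März in Karlsruhe."))
```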
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- num_epochs: 3
### Training results
| loss | epoch | step | eval_loss | eval_f1 | eval_precision | eval_recall |
| --- | --- | --- | --- | --- | --- | --- |
| 0.7592 | 0.029958058717795086 | 50 | nan | nan | nan | nan |
| 0.1908 | 0.05991611743559017 | 100 | nan | nan | nan | nan |
| 0.1288 | 0.08987417615338526 | 150 | nan | nan | nan | nan |
| 0.0877 | 0.11983223487118035 | 200 | nan | nan | nan | nan |
| nan | 0.11983223487118035 | 200 | 0.07891597598791122 | 0.71008601947254 | 0.6772448611611973 | 0.7462745877210412 |
| 0.0782 | 0.14979029358897544 | 250 | nan | nan | nan | nan |
| 0.071 | 0.17974835230677053 | 300 | nan | nan | nan | nan |
| 0.0605 | 0.2097064110245656 | 350 | nan | nan | nan | nan |
| 0.0491 | 0.2396644697423607 | 400 | nan | nan | nan | nan |
| nan | 0.2396644697423607 | 400 | 0.05076289176940918 | 0.799463190184049 | 0.7723652528245971 | 0.828531690840453 |
| 0.0471 | 0.2696225284601558 | 450 | nan | nan | nan | nan |
| 0.0463 | 0.2995805871779509 | 500 | nan | nan | nan | nan |
| 0.0544 | 0.32953864589574594 | 550 | nan | nan | nan | nan |
| 0.0355 | 0.35949670461354105 | 600 | nan | nan | nan | nan |
| nan | 0.35949670461354105 | 600 | 0.03874693065881729 | 0.8478779208392943 | 0.8152971386647102 | 0.8831710709318498 |
| 0.0479 | 0.3894547633313361 | 650 | nan | nan | nan | nan |
| 0.0343 | 0.4194128220491312 | 700 | nan | nan | nan | nan |
| 0.033 | 0.44937088076692633 | 750 | nan | nan | nan | nan |
| 0.0367 | 0.4793289394847214 | 800 | nan | nan | nan | nan |
| nan | 0.4793289394847214 | 800 | 0.03735911101102829 | 0.8572247706422018 | 0.8258147670778863 | 0.891118617126962 |
| 0.0343 | 0.5092869982025164 | 850 | nan | nan | nan | nan |
| 0.0361 | 0.5392450569203115 | 900 | nan | nan | nan | nan |
| 0.0408 | 0.5692031156381067 | 950 | nan | nan | nan | nan |
| 0.0315 | 0.5991611743559018 | 1000 | nan | nan | nan | nan |
| nan | 0.5991611743559018 | 1000 | 0.030343208461999893 | 0.8759884281581485 | 0.8510399100618324 | 0.902443870454997 |
| 0.036 | 0.6291192330736968 | 1050 | nan | nan | nan | nan |
| 0.0225 | 0.6590772917914919 | 1100 | nan | nan | nan | nan |
| 0.0275 | 0.689035350509287 | 1150 | nan | nan | nan | nan |
| 0.0296 | 0.7189934092270821 | 1200 | nan | nan | nan | nan |
| nan | 0.7189934092270821 | 1200 | 0.03183047100901604 | 0.8875096974398758 | 0.866830839174086 | 0.9091992847208424 |
| 0.04 | 0.7489514679448772 | 1250 | nan | nan | nan | nan |
| 0.0281 | 0.7789095266626722 | 1300 | nan | nan | nan | nan |
| 0.0433 | 0.8088675853804673 | 1350 | nan | nan | nan | nan |
| 0.0299 | 0.8388256440982624 | 1400 | nan | nan | nan | nan |
| nan | 0.8388256440982624 | 1400 | 0.027965761721134186 | 0.8886951467596629 | 0.8671077504725898 | 0.9113848599244984 |
| 0.0381 | 0.8687837028160575 | 1450 | nan | nan | nan | nan |
| 0.0298 | 0.8987417615338527 | 1500 | nan | nan | nan | nan |
| 0.0278 | 0.9286998202516477 | 1550 | nan | nan | nan | nan |
| 0.0261 | 0.9586578789694428 | 1600 | nan | nan | nan | nan |
| nan | 0.9586578789694428 | 1600 | 0.028123166412115097 | 0.9013783731314309 | 0.8811918770165117 | 0.9225114245976554 |
| 0.0262 | 0.9886159376872379 | 1650 | nan | nan | nan | nan |
| 0.0178 | 1.0185739964050329 | 1700 | nan | nan | nan | nan |
| 0.0132 | 1.048532055122828 | 1750 | nan | nan | nan | nan |
| 0.0157 | 1.078490113840623 | 1800 | nan | nan | nan | nan |
| nan | 1.078490113840623 | 1800 | 0.028561240062117577 | 0.9077876791941109 | 0.8856548856548857 | 0.9310550367574012 |
| 0.0132 | 1.1084481725584183 | 1850 | nan | nan | nan | nan |
| 0.0155 | 1.1384062312762133 | 1900 | nan | nan | nan | nan |
| 0.0129 | 1.1683642899940083 | 1950 | nan | nan | nan | nan |
| 0.0148 | 1.1983223487118035 | 2000 | nan | nan | nan | nan |
| nan | 1.1983223487118035 | 2000 | 0.026516983285546303 | 0.9101913315111284 | 0.8946459412780656 | 0.9262865090403338 |
| 0.0106 | 1.2282804074295985 | 2050 | nan | nan | nan | nan |
| 0.0108 | 1.2582384661473935 | 2100 | nan | nan | nan | nan |
| 0.0165 | 1.2881965248651888 | 2150 | nan | nan | nan | nan |
| 0.0167 | 1.3181545835829838 | 2200 | nan | nan | nan | nan |
| nan | 1.3181545835829838 | 2200 | 0.02606791816651821 | 0.9090380703283929 | 0.8869565217391304 | 0.932247168686668 |
| 0.0151 | 1.348112642300779 | 2250 | nan | nan | nan | nan |
| 0.0169 | 1.378070701018574 | 2300 | nan | nan | nan | nan |
| 0.0165 | 1.4080287597363692 | 2350 | nan | nan | nan | nan |
| 0.0127 | 1.4379868184541642 | 2400 | nan | nan | nan | nan |
| nan | 1.4379868184541642 | 2400 | 0.02857920527458191 | 0.9142130490071408 | 0.9003853564547206 | 0.9284720842439896 |
| 0.0131 | 1.4679448771719592 | 2450 | nan | nan | nan | nan |
| 0.0147 | 1.4979029358897544 | 2500 | nan | nan | nan | nan |
| 0.0131 | 1.5278609946075494 | 2550 | nan | nan | nan | nan |
| 0.0116 | 1.5578190533253444 | 2600 | nan | nan | nan | nan |
| nan | 1.5578190533253444 | 2600 | 0.0249184537678957 | 0.9242824958370065 | 0.9115146831530139 | 0.9374130737134909 |
| 0.0166 | 1.5877771120431396 | 2650 | nan | nan | nan | nan |
| 0.0145 | 1.6177351707609346 | 2700 | nan | nan | nan | nan |
| 0.0152 | 1.6476932294787296 | 2750 | nan | nan | nan | nan |
| 0.0119 | 1.6776512881965249 | 2800 | nan | nan | nan | nan |
| nan | 1.6776512881965249 | 2800 | 0.024047361686825752 | 0.9198402649264634 | 0.9021780664883454 | 0.9382078283330022 |
| 0.0157 | 1.70760934691432 | 2850 | nan | nan | nan | nan |
| 0.0139 | 1.737567405632115 | 2900 | nan | nan | nan | nan |
| 0.0161 | 1.76752546434991 | 2950 | nan | nan | nan | nan |
| 0.0118 | 1.7974835230677053 | 3000 | nan | nan | nan | nan |
| nan | 1.7974835230677053 | 3000 | 0.02312026545405388 | 0.9283470749901845 | 0.9173617846750728 | 0.9395986489171468 |
| 0.0097 | 1.8274415817855003 | 3050 | nan | nan | nan | nan |
| 0.016 | 1.8573996405032953 | 3100 | nan | nan | nan | nan |
| 0.013 | 1.8873576992210905 | 3150 | nan | nan | nan | nan |
| 0.0133 | 1.9173157579388855 | 3200 | nan | nan | nan | nan |
| nan | 1.9173157579388855 | 3200 | 0.023281875997781754 | 0.9217849819353578 | 0.9062980030721967 | 0.9378104510232466 |
| 0.0175 | 1.9472738166566805 | 3250 | nan | nan | nan | nan |
| 0.0154 | 1.9772318753744758 | 3300 | nan | nan | nan | nan |
| 0.0096 | 2.007189934092271 | 3350 | nan | nan | nan | nan |
| 0.0057 | 2.0371479928100658 | 3400 | nan | nan | nan | nan |
| nan | 2.0371479928100658 | 3400 | 0.023734014481306076 | 0.9255650818394388 | 0.9080481743452494 | 0.9437711106695807 |
| 0.0083 | 2.067106051527861 | 3450 | nan | nan | nan | nan |
| 0.0081 | 2.097064110245656 | 3500 | nan | nan | nan | nan |
| 0.0075 | 2.127022168963451 | 3550 | nan | nan | nan | nan |
| 0.0057 | 2.156980227681246 | 3600 | nan | nan | nan | nan |
| nan | 2.156980227681246 | 3600 | 0.025288647040724754 | 0.9282016215688189 | 0.9129515757109915 | 0.9439697993244586 |
| 0.0049 | 2.1869382863990414 | 3650 | nan | nan | nan | nan |
| 0.0076 | 2.2168963451168366 | 3700 | nan | nan | nan | nan |
| 0.0067 | 2.2468544038346314 | 3750 | nan | nan | nan | nan |
| 0.009 | 2.2768124625524266 | 3800 | nan | nan | nan | nan |
| nan | 2.2768124625524266 | 3800 | 0.025000886991620064 | 0.926643935703848 | 0.9090214067278287 | 0.9449632425988476 |
| 0.0067 | 2.306770521270222 | 3850 | nan | nan | nan | nan |
| 0.0081 | 2.3367285799880166 | 3900 | nan | nan | nan | nan |
| 0.0073 | 2.366686638705812 | 3950 | nan | nan | nan | nan |
| 0.0059 | 2.396644697423607 | 4000 | nan | nan | nan | nan |
| nan | 2.396644697423607 | 4000 | 0.024387583136558533 | 0.9357906087638466 | 0.9235681114551083 | 0.9483409497317703 |
| 0.0073 | 2.426602756141402 | 4050 | nan | nan | nan | nan |
| 0.0075 | 2.456560814859197 | 4100 | nan | nan | nan | nan |
| 0.0052 | 2.4865188735769923 | 4150 | nan | nan | nan | nan |
| 0.006 | 2.516476932294787 | 4200 | nan | nan | nan | nan |
| nan | 2.516476932294787 | 4200 | 0.024684011936187744 | 0.9356576241308392 | 0.9225569718037853 | 0.9491357043512816 |
| 0.0088 | 2.5464349910125823 | 4250 | nan | nan | nan | nan |
| 0.0122 | 2.5763930497303775 | 4300 | nan | nan | nan | nan |
| 0.008 | 2.6063511084481723 | 4350 | nan | nan | nan | nan |
| 0.0072 | 2.6363091671659675 | 4400 | nan | nan | nan | nan |
| nan | 2.6363091671659675 | 4400 | 0.02404804341495037 | 0.9307684796406601 | 0.9151305683563749 | 0.9469501291476257 |
| 0.0075 | 2.6662672258837627 | 4450 | nan | nan | nan | nan |
| 0.0072 | 2.696225284601558 | 4500 | nan | nan | nan | nan |
| 0.0061 | 2.7261833433193527 | 4550 | nan | nan | nan | nan |
| 0.0057 | 2.756141402037148 | 4600 | nan | nan | nan | nan |
| nan | 2.756141402037148 | 4600 | 0.026269957423210144 | 0.9321470473210794 | 0.9176130895091434 | 0.9471488178025035 |
| 0.0038 | 2.786099460754943 | 4650 | nan | nan | nan | nan |
| 0.008 | 2.8160575194727384 | 4700 | nan | nan | nan | nan |
| 0.0073 | 2.846015578190533 | 4750 | nan | nan | nan | nan |
| 0.0065 | 2.8759736369083284 | 4800 | nan | nan | nan | nan |
| nan | 2.8759736369083284 | 4800 | 0.02427930384874344 | 0.9350088356567839 | 0.9241218707549 | 0.9461553745281145 |
| 0.0056 | 2.9059316956261236 | 4850 | nan | nan | nan | nan |
| 0.0074 | 2.9358897543439184 | 4900 | nan | nan | nan | nan |
| 0.0059 | 2.9658478130617136 | 4950 | nan | nan | nan | nan |
| 0.006 | 2.995805871779509 | 5000 | nan | nan | nan | nan |
| nan | 2.995805871779509 | 5000 | 0.025616737082600594 | 0.9314559499364427 | 0.9170196380438969 | 0.9463540631829922 |
| 0.004 | 3.0257639304973036 | 5050 | nan | nan | nan | nan |
| 0.0047 | 3.055721989215099 | 5100 | nan | nan | nan | nan |
| 0.0026 | 3.085680047932894 | 5150 | nan | nan | nan | nan |
| 0.0047 | 3.115638106650689 | 5200 | nan | nan | nan | nan |
| nan | 3.115638106650689 | 5200 | 0.02595394104719162 | 0.935866053069617 | 0.9225868725868726 | 0.9495330816610371 |
| 0.0036 | 3.145596165368484 | 5250 | nan | nan | nan | nan |
| 0.0035 | 3.1755542240862793 | 5300 | nan | nan | nan | nan |
| 0.0034 | 3.205512282804074 | 5350 | nan | nan | nan | nan |
| 0.0025 | 3.2354703415218693 | 5400 | nan | nan | nan | nan |
| nan | 3.2354703415218693 | 5400 | 0.02661316469311714 | 0.9393134651322907 | 0.9300740163615115 | 0.9487383270415259 |
| 0.0041 | 3.2654284002396645 | 5450 | nan | nan | nan | nan |
| 0.0033 | 3.2953864589574597 | 5500 | nan | nan | nan | nan |
| 0.0042 | 3.3253445176752545 | 5550 | nan | nan | nan | nan |
| 0.0033 | 3.3553025763930497 | 5600 | nan | nan | nan | nan |
| nan | 3.3553025763930497 | 5600 | 0.02527858316898346 | 0.9387915764613265 | 0.9300058490933906 | 0.947744883767137 |
| 0.0016 | 3.385260635110845 | 5650 | nan | nan | nan | nan |
| 0.0027 | 3.4152186938286397 | 5700 | nan | nan | nan | nan |
| 0.0032 | 3.445176752546435 | 5750 | nan | nan | nan | nan |
| 0.0054 | 3.47513481126423 | 5800 | nan | nan | nan | nan |
| nan | 3.47513481126423 | 5800 | 0.026085887104272842 | 0.9393850083505256 | 0.9290711232024874 | 0.9499304589707928 |
| 0.0041 | 3.5050928699820254 | 5850 | nan | nan | nan | nan |
| 0.0031 | 3.53505092869982 | 5900 | nan | nan | nan | nan |
| 0.0043 | 3.5650089874176154 | 5950 | nan | nan | nan | nan |
| 0.0036 | 3.5949670461354106 | 6000 | nan | nan | nan | nan |
| nan | 3.5949670461354106 | 6000 | 0.026285560801625252 | 0.9379647749510763 | 0.9240408714092925 | 0.9523147228293265 |
| 0.003 | 3.6249251048532054 | 6050 | nan | nan | nan | nan |
| 0.0036 | 3.6548831635710006 | 6100 | nan | nan | nan | nan |
| 0.003 | 3.684841222288796 | 6150 | nan | nan | nan | nan |
| 0.0018 | 3.7147992810065906 | 6200 | nan | nan | nan | nan |
| nan | 3.7147992810065906 | 6200 | 0.026966776698827744 | 0.9423455332546242 | 0.9333463262521926 | 0.9515199682098152 |
| 0.002 | 3.744757339724386 | 6250 | nan | nan | nan | nan |
| 0.0031 | 3.774715398442181 | 6300 | nan | nan | nan | nan |
| 0.0026 | 3.804673457159976 | 6350 | nan | nan | nan | nan |
| 0.0032 | 3.834631515877771 | 6400 | nan | nan | nan | nan |
| nan | 3.834631515877771 | 6400 | 0.02665964514017105 | 0.9404937543031378 | 0.9312426957537983 | 0.9499304589707928 |
| 0.0046 | 3.8645895745955663 | 6450 | nan | nan | nan | nan |
| 0.0037 | 3.894547633313361 | 6500 | nan | nan | nan | nan |
| 0.006 | 3.9245056920311563 | 6550 | nan | nan | nan | nan |
| 0.0041 | 3.9544637507489515 | 6600 | nan | nan | nan | nan |
| nan | 3.9544637507489515 | 6600 | 0.025271492078900337 | 0.9411996066863323 | 0.9316721822075141 | 0.9509239022451818 |
| 0.0028 | 3.9844218094667463 | 6650 | nan | nan | nan | nan |
| 0.003 | 4.014379868184542 | 6700 | nan | nan | nan | nan |
| 0.0027 | 4.044337926902337 | 6750 | nan | nan | nan | nan |
| 0.0017 | 4.0742959856201315 | 6800 | nan | nan | nan | nan |
| nan | 4.0742959856201315 | 6800 | 0.026743704453110695 | 0.9429133858267716 | 0.934269553345036 | 0.951718656864693 |
| 0.0029 | 4.104254044337927 | 6850 | nan | nan | nan | nan |
| 0.0025 | 4.134212103055722 | 6900 | nan | nan | nan | nan |
| 0.0017 | 4.164170161773517 | 6950 | nan | nan | nan | nan |
| 0.0015 | 4.194128220491312 | 7000 | nan | nan | nan | nan |
| nan | 4.194128220491312 | 7000 | 0.026866145431995392 | 0.9408062930186825 | 0.9312828499124002 | 0.9505265249354262 |
| 0.0014 | 4.224086279209107 | 7050 | nan | nan | nan | nan |
| 0.0023 | 4.254044337926902 | 7100 | nan | nan | nan | nan |
| 0.0034 | 4.284002396644698 | 7150 | nan | nan | nan | nan |
| 0.0027 | 4.313960455362492 | 7200 | nan | nan | nan | nan |
| nan | 4.313960455362492 | 7200 | 0.026673471555113792 | 0.9432142505658893 | 0.9344773790951638 | 0.9521160341744487 |
| 0.001 | 4.343918514080288 | 7250 | nan | nan | nan | nan |
| 0.0016 | 4.373876572798083 | 7300 | nan | nan | nan | nan |
| 0.0061 | 4.403834631515878 | 7350 | nan | nan | nan | nan |
| 0.0015 | 4.433792690233673 | 7400 | nan | nan | nan | nan |
| nan | 4.433792690233673 | 7400 | 0.026809940114617348 | 0.9424722194906088 | 0.9330218068535826 | 0.9521160341744487 |
| 0.0022 | 4.463750748951468 | 7450 | nan | nan | nan | nan |
| 0.001 | 4.493708807669263 | 7500 | nan | nan | nan | nan |
| 0.0015 | 4.5236668663870585 | 7550 | nan | nan | nan | nan |
| 0.0019 | 4.553624925104853 | 7600 | nan | nan | nan | nan |
| nan | 4.553624925104853 | 7600 | 0.02733566425740719 | 0.9416846652267818 | 0.9307199689501261 | 0.9529107887939599 |
| 0.0012 | 4.583582983822648 | 7650 | nan | nan | nan | nan |
| 0.0025 | 4.613541042540444 | 7700 | nan | nan | nan | nan |
| 0.001 | 4.6434991012582385 | 7750 | nan | nan | nan | nan |
| 0.0015 | 4.673457159976033 | 7800 | nan | nan | nan | nan |
| nan | 4.673457159976033 | 7800 | 0.027779242023825645 | 0.9427390791027155 | 0.9337361138179692 | 0.9519173455195709 |
| 0.0021 | 4.703415218693829 | 7850 | nan | nan | nan | nan |
| 0.0031 | 4.733373277411624 | 7900 | nan | nan | nan | nan |
| 0.0017 | 4.7633313361294185 | 7950 | nan | nan | nan | nan |
| 0.0022 | 4.793289394847214 | 8000 | nan | nan | nan | nan |
| nan | 4.793289394847214 | 8000 | 0.02728326804935932 | 0.9410609037328094 | 0.9306392073052263 | 0.951718656864693 |
| 0.0023 | 4.823247453565009 | 8050 | nan | nan | nan | nan |
| 0.0019 | 4.853205512282804 | 8100 | nan | nan | nan | nan |
| 0.0011 | 4.883163571000599 | 8150 | nan | nan | nan | nan |
| 0.0028 | 4.913121629718394 | 8200 | nan | nan | nan | nan |
| nan | 4.913121629718394 | 8200 | 0.02626235969364643 | 0.9436411920920625 | 0.9343591741332294 | 0.9531094774488377 |
| 0.002 | 4.943079688436189 | 8250 | nan | nan | nan | nan |
| 0.0013 | 4.973037747153985 | 8300 | nan | nan | nan | nan |
| 0.0023 | 5.002995805871779 | 8350 | nan | nan | nan | nan |
| 0.001 | 5.032953864589574 | 8400 | nan | nan | nan | nan |
| nan | 5.032953864589574 | 8400 | 0.02664945460855961 | 0.9427953607234125 | 0.9328924333787201 | 0.9529107887939599 |
| 0.0008 | 5.06291192330737 | 8450 | nan | nan | nan | nan |
| 0.001 | 5.092869982025165 | 8500 | nan | nan | nan | nan |
| 0.0019 | 5.12282804074296 | 8550 | nan | nan | nan | nan |
| 0.0016 | 5.152786099460755 | 8600 | nan | nan | nan | nan |
| nan | 5.152786099460755 | 8600 | 0.02724417671561241 | 0.942354905234214 | 0.9316504854368932 | 0.9533081661037155 |
| 0.001 | 5.18274415817855 | 8650 | nan | nan | nan | nan |
| 0.001 | 5.2127022168963455 | 8700 | nan | nan | nan | nan |
| 0.0008 | 5.24266027561414 | 8750 | nan | nan | nan | nan |
| 0.0009 | 5.272618334331935 | 8800 | nan | nan | nan | nan |
| nan | 5.272618334331935 | 8800 | 0.027002455666661263 | 0.9444772593030125 | 0.936 | 0.9531094774488377 |
| 0.0015 | 5.302576393049731 | 8850 | nan | nan | nan | nan |
| 0.0027 | 5.3325344517675255 | 8900 | nan | nan | nan | nan |
| 0.0015 | 5.36249251048532 | 8950 | nan | nan | nan | nan |
| 0.0018 | 5.392450569203116 | 9000 | nan | nan | nan | nan |
| nan | 5.392450569203116 | 9000 | 0.026851218193769455 | 0.9485511531638084 | 0.941130451789556 | 0.9560898072720048 |
| 0.0011 | 5.422408627920911 | 9050 | nan | nan | nan | nan |
| 0.001 | 5.4523666866387055 | 9100 | nan | nan | nan | nan |
| 0.0009 | 5.482324745356501 | 9150 | nan | nan | nan | nan |
| 0.0022 | 5.512282804074296 | 9200 | nan | nan | nan | nan |
| nan | 5.512282804074296 | 9200 | 0.026996750384569168 | 0.9460842188114915 | 0.9370493081270708 | 0.9552950526524936 |
| 0.0018 | 5.542240862792091 | 9250 | nan | nan | nan | nan |
| 0.0008 | 5.572198921509886 | 9300 | nan | nan | nan | nan |
| 0.0017 | 5.602156980227681 | 9350 | nan | nan | nan | nan |
| 0.0009 | 5.632115038945477 | 9400 | nan | nan | nan | nan |
| nan | 5.632115038945477 | 9400 | 0.02744028903543949 | 0.9479802955665025 | 0.9401993355481728 | 0.955891118617127 |
| 0.0012 | 5.662073097663272 | 9450 | nan | nan | nan | nan |
| 0.0016 | 5.692031156381066 | 9500 | nan | nan | nan | nan |
| 0.0007 | 5.721989215098862 | 9550 | nan | nan | nan | nan |
| 0.0006 | 5.751947273816657 | 9600 | nan | nan | nan | nan |
| nan | 5.751947273816657 | 9600 | 0.027505146339535713 | 0.9460523725142744 | 0.937560975609756 | 0.9546989866878601 |
| 0.0007 | 5.781905332534452 | 9650 | nan | nan | nan | nan |
| 0.0007 | 5.811863391252247 | 9700 | nan | nan | nan | nan |
| 0.0007 | 5.841821449970042 | 9750 | nan | nan | nan | nan |
| 0.001 | 5.871779508687837 | 9800 | nan | nan | nan | nan |
| nan | 5.871779508687837 | 9800 | 0.02771810069680214 | 0.9461561177281229 | 0.9375731564572767 | 0.954897675342738 |
| 0.0011 | 5.9017375674056325 | 9850 | nan | nan | nan | nan |
| 0.001 | 5.931695626123427 | 9900 | nan | nan | nan | nan |
| 0.0008 | 5.961653684841222 | 9950 | nan | nan | nan | nan |
| 0.002 | 5.991611743559018 | 10000 | nan | nan | nan | nan |
| nan | 5.991611743559018 | 10000 | 0.027671782299876213 | 0.946808510638298 | 0.9388552451650714 | 0.954897675342738 |
| nan | 6.0 | 10014 | nan | nan | nan | nan |
## Framework versions
- Transformers: 2.3.0
- Pytorch: (see environment)
- Datasets: (see environment)
- Tokenizers: (see environment)
|
LarryAIDraw/ChamSkirkPonyXL | LarryAIDraw | 2025-06-16T07:06:03Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-16T06:23:48Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/503333/skirk-or-genshin-impact-or-pony-xl |
LarryAIDraw/Skirk_v2.0_pony-000034 | LarryAIDraw | 2025-06-16T07:05:53Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-16T06:23:24Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/598575/genshin-impactskirkpony |
LakshGupta/ppo-LunarLander-v2 | LakshGupta | 2025-06-16T07:05:12Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-16T07:04:46Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 262.67 +/- 16.08
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the repo's naming convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption; adjust to the actual file in the repo.
checkpoint = load_from_hub("LakshGupta/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
LarryAIDraw/skirk_genshinPDXL_scarxzys | LarryAIDraw | 2025-06-16T07:04:43Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-16T06:23:01Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/1062378/pony-skirk-or-genshin-impact |
LarryAIDraw/Genshin_Yae_Miko | LarryAIDraw | 2025-06-16T07:04:33Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-16T06:22:38Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/1662698/genshin-yae-miko |
GAGABIG/CNN | GAGABIG | 2025-06-16T07:02:28Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-13T09:57:53Z | ---
license: apache-2.0
---
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.05_0.75_epoch2 | MinaMila | 2025-06-16T07:00:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T06:59:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BootesVoid/cmbyjnk1403xvrdqsg2kyovgu_cmbypcpjv048qrdqs299msggc | BootesVoid | 2025-06-16T07:00:50Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-16T07:00:49Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: EMILY01
---
# Cmbyjnk1403Xvrdqsg2Kyovgu_Cmbypcpjv048Qrdqs299Msggc
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `EMILY01` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "EMILY01",
"lora_weights": "https://huggingface.co/BootesVoid/cmbyjnk1403xvrdqsg2kyovgu_cmbypcpjv048qrdqs299msggc/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbyjnk1403xvrdqsg2kyovgu_cmbypcpjv048qrdqs299msggc', weight_name='lora.safetensors')
image = pipeline('EMILY01').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbyjnk1403xvrdqsg2kyovgu_cmbypcpjv048qrdqs299msggc/discussions) to add images that show off what you’ve made with this LoRA.
|
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.5_0.15_0.75_epoch1 | MinaMila | 2025-06-16T07:00:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T06:58:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Baron-qui/distilhubert-finetuned-gtzan | Baron-qui | 2025-06-16T06:59:19Z | 1 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2025-06-14T00:12:46Z | ---
library_name: transformers
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.84
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5618
- Accuracy: 0.84
## Model description
More information needed
## Intended uses & limitations
More information needed
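In the meantime, a minimal inference sketch (assuming the standard 🤗 `pipeline` API; `track.wav` is a placeholder for a local audio file):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an audio-classification pipeline
classifier = pipeline("audio-classification", model="Baron-qui/distilhubert-finetuned-gtzan")

# Returns GTZAN genre labels with scores
predictions = classifier("track.wav")
print(predictions)
```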
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.8647 | 1.0 | 150 | 1.7671 | 0.58 |
| 1.0983 | 2.0 | 300 | 1.1722 | 0.65 |
| 0.875 | 3.0 | 450 | 0.9809 | 0.73 |
| 0.596 | 4.0 | 600 | 0.9323 | 0.75 |
| 0.4549 | 5.0 | 750 | 0.6444 | 0.82 |
| 0.1644 | 6.0 | 900 | 0.5420 | 0.85 |
| 0.136 | 7.0 | 1050 | 0.5333 | 0.82 |
| 0.1289 | 8.0 | 1200 | 0.6917 | 0.82 |
| 0.029 | 9.0 | 1350 | 0.5613 | 0.85 |
| 0.0409 | 10.0 | 1500 | 0.5618 | 0.84 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
numiros/Comma-Epsilon-v0.1-exl2 | numiros | 2025-06-16T06:55:19Z | 0 | 0 | null | [
"exl2",
"base_model:numiros/Comma-Epsilon-v0.1",
"base_model:finetune:numiros/Comma-Epsilon-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2025-06-16T05:47:23Z | ---
license: apache-2.0
base_model:
- numiros/Comma-Epsilon-v0.1
tags:
- exl2
---
[4bpw](https://huggingface.co/numiros/Comma-Epsilon-v0.1-exl2/tree/4bpw)
[5bpw](https://huggingface.co/numiros/Comma-Epsilon-v0.1-exl2/tree/5bpw) |
John6666/satyr-remix-ankara-illustrious-v17-sdxl | John6666 | 2025-06-16T06:55:10Z | 0 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"fantasy",
"paintery",
"styles",
"prompt comphrehension",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-06-16T06:49:23Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- fantasy
- paintery
- styles
- prompt comphrehension
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/974951?modelVersionId=1905968).
This model was created by [Labdoge207](https://civitai.com/user/Labdoge207).
|
danielpacheco9468/msa0o | danielpacheco9468 | 2025-06-16T06:49:42Z | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-06-16T06:49:42Z | ---
license: bigcode-openrail-m
---
|
rafaelrocha1214/msa0o | rafaelrocha1214 | 2025-06-16T06:49:42Z | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-06-16T06:49:42Z | ---
license: bigcode-openrail-m
---
|
ricardoguerreiro1800/msa0o | ricardoguerreiro1800 | 2025-06-16T06:49:42Z | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-06-16T06:49:42Z | ---
license: bigcode-openrail-m
---
|
eduardamendes1094/msa0o | eduardamendes1094 | 2025-06-16T06:49:42Z | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-06-16T06:49:42Z | ---
license: bigcode-openrail-m
---
|
brunobrito7123/msa0o | brunobrito7123 | 2025-06-16T06:49:42Z | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-06-16T06:49:42Z | ---
license: bigcode-openrail-m
---
|
williamneto4753/msao0 | williamneto4753 | 2025-06-16T06:49:42Z | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-06-16T06:49:42Z | ---
license: bigcode-openrail-m
---
|
westlake-repl/SaProt_1.3B_AFDB_OMG_NCBI | westlake-repl | 2025-06-16T06:48:45Z | 197 | 0 | null | [
"pytorch",
"safetensors",
"esm",
"license:mit",
"region:us"
] | null | 2025-05-06T12:26:20Z | ---
license: mit
---
We further trained a 1.3 billion parameter version of the SaProt model, setting the context length to 1536 during training, and used a combined dataset of AFDB, OMG_prot50, and NCBI (70% identity filtering), totaling 383 million sequences. The training strategy is similar to that of [SaProt-O](https://github.com/westlake-repl/Denovo-Pinal/wiki/Tutorial), employing multimodal input integration (sequence and structural data) to ensure better alignment with real-world research applications.
Specifically, the training data is a mixture of UniRef50 (40%), OMG (30%), and NCBI (30%).
For sequences from OMG and NCBI lacking corresponding structural information, we employ masked language modeling, where the model predicts the masked amino acid tokens.
For the UniRef50 dataset, which includes structural data, we applied four distinct training strategies, each sampled with equal probability (25%):
- Predicting all amino acid tokens given partial masked structural tokens.
- Predicting all amino acid tokens given complete structural tokens.
- Predicting partial amino acid tokens given their amino acid token context and partial masked structural tokens.
- Predicting partial amino acid tokens given their amino acid token context and complete structural tokens.
SaProt_1.3B_AFDB_OMG_NCBI is also a model very useful for protein editing. For instance, if you wish to modify certain regions of your protein—whether natural proteins or de novo designed—you can easily mask these amino acids by inputting partial or complete structures. Remarkably, the model functions effectively even if only sequence data is provided. If you have text data and would like to incorporate it, please refer to [SaProt-T/O](http://113.45.254.183:9527/). The relevant link can be found in the interface of [Pinal](http://www.denovo-pinal.com/).
### Loading model from huggingface
> SaProt_1.3B_AFDB_OMG_NCBI, unlike [SaProt_650M_AF2](https://huggingface.co/westlake-repl/SaProt_650M_AF2), **does not support** loading from the esm library
The following code shows how to load the model.
```python
from transformers import EsmTokenizer, EsmForMaskedLM
model_path = "/your/path/to/SaProt_1.3B_AFDB_OMG_NCBI"
tokenizer = EsmTokenizer.from_pretrained(model_path)
model = EsmForMaskedLM.from_pretrained(model_path)
#################### Example ####################
device = "cuda"
model.to(device)
seq = "M#EvVpQpL#VyQdYaKv" # Here "#" represents low-pLDDT regions (pLDDT < 70)
tokens = tokenizer.tokenize(seq)
print(tokens)
inputs = tokenizer(seq, return_tensors="pt")
inputs = {k: v.to(device) for k, v in inputs.items()}
outputs = model(**inputs)
print(outputs.logits.shape)
"""
['M#', 'Ev', 'Vp', 'Qp', 'L#', 'Vy', 'Qd', 'Ya', 'Kv']
torch.Size([1, 11, 446])
"""
``` |
himedia/fincredit-lamma3-4b-lr5e05-bs2-r16-steps10-20250616_064455 | himedia | 2025-06-16T06:46:27Z | 0 | 0 | null | [
"safetensors",
"financial",
"credit-rating",
"korean",
"gemma",
"unsloth",
"fine-tuned",
"text-generation",
"conversational",
"ko",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-3B-Instruct",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-06-16T06:46:19Z | ---
language: ko
license: apache-2.0
base_model: unsloth/Llama-3.2-3B-Instruct
tags:
- financial
- credit-rating
- korean
- gemma
- unsloth
- fine-tuned
model_name: FinCreditLlama-3.2-3B
pipeline_tag: text-generation
---
# FinCreditLlama-3.2-3B
## Model Overview
FinCreditLlama-3.2-3B is a Korean-language model designed specifically for financial credit rating.
**Base model**: unsloth/Llama-3.2-3B-Instruct
**Dataset**: himedia/financial_dummy_data_v2
**Training method**: LoRA (Low-Rank Adaptation)
**Training date**: 20250616_064455
## Hyperparameters
- **Learning Rate**: 5e-05
- **Max Steps**: 10
- **Batch Size**: 2
- **Gradient Accumulation**: 4
- **LoRA r**: 16
- **LoRA alpha**: 16
- **Max Sequence Length**: 2048
- **Warmup Steps**: 5
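The training script itself is not published on this card; as a rough, hypothetical sketch, the values above map onto a standard PEFT/Transformers configuration like this (`output_dir` and `task_type` are assumptions):

```python
from peft import LoraConfig
from transformers import TrainingArguments

# Hypothetical mirror of the hyperparameters listed above
lora_config = LoraConfig(r=16, lora_alpha=16, task_type="CAUSAL_LM")
training_args = TrainingArguments(
    output_dir="outputs",            # assumption: not stated on the card
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    max_steps=10,
    warmup_steps=5,
)
```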
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("himedia/fincredit-lamma3-4b-lr5e05-bs2-r16-steps10-20250616_064455")
model = AutoModelForCausalLM.from_pretrained("himedia/fincredit-lamma3-4b-lr5e05-bs2-r16-steps10-20250616_064455")
# Simple inference example; the prompt means "Please evaluate the customer's credit rating:"
prompt = "고객의 신용등급을 평가해주세요:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
## Repository Name Breakdown
```
fincredit-lamma3-4b-lr5e05-bs2-r16-steps10-20250616_064455
```
- `fincredit-lamma3-4b`: model base name
- `lr5e05`: learning rate
- `bs2`: batch size
- `r16`: LoRA rank
- `steps10`: training steps
- `20250616_064455`: training timestamp
## Performance
This model was fine-tuned on Korean financial text and specializes in credit-rating question answering.
## License
Apache 2.0
|
Alptekinege/Huihui-MoE-1.5B-A0.6B-abliterated-Q6_K-GGUF | Alptekinege | 2025-06-16T06:45:28Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"moe",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/Huihui-MoE-1.5B-A0.6B-abliterated",
"base_model:quantized:huihui-ai/Huihui-MoE-1.5B-A0.6B-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T06:45:19Z | ---
license: apache-2.0
base_model: huihui-ai/Huihui-MoE-1.5B-A0.6B-abliterated
library_name: transformers
license_link: https://huggingface.co/Qwen/Qwen3-0.6B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- moe
- llama-cpp
- gguf-my-repo
extra_gated_prompt: '**Usage Warnings**
"**Risk of Sensitive or Controversial Outputs**": This model’s safety filtering
has been significantly reduced, potentially generating sensitive, controversial,
or inappropriate content. Users should exercise caution and rigorously review generated
outputs.
"**Not Suitable for All Audiences**": Due to limited content filtering, the model’s
outputs may be inappropriate for public settings, underage users, or applications
requiring high security.
"**Legal and Ethical Responsibilities**": Users must ensure their usage complies
with local laws and ethical standards. Generated content may carry legal or ethical
risks, and users are solely responsible for any consequences.
"**Research and Experimental Use**": It is recommended to use this model for research,
testing, or controlled environments, avoiding direct use in production or public-facing
commercial applications.
"**Monitoring and Review Recommendations**": Users are strongly advised to monitor
model outputs in real-time and conduct manual reviews when necessary to prevent
the dissemination of inappropriate content.
"**No Default Safety Guarantees**": Unlike standard models, this model has not undergone
rigorous safety optimization. huihui.ai bears no responsibility for any consequences
arising from its use.'
---
# Alptekinege/Huihui-MoE-1.5B-A0.6B-abliterated-Q6_K-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-MoE-1.5B-A0.6B-abliterated`](https://huggingface.co/huihui-ai/Huihui-MoE-1.5B-A0.6B-abliterated) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-MoE-1.5B-A0.6B-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Alptekinege/Huihui-MoE-1.5B-A0.6B-abliterated-Q6_K-GGUF --hf-file huihui-moe-1.5b-a0.6b-abliterated-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Alptekinege/Huihui-MoE-1.5B-A0.6B-abliterated-Q6_K-GGUF --hf-file huihui-moe-1.5b-a0.6b-abliterated-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Alptekinege/Huihui-MoE-1.5B-A0.6B-abliterated-Q6_K-GGUF --hf-file huihui-moe-1.5b-a0.6b-abliterated-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Alptekinege/Huihui-MoE-1.5B-A0.6B-abliterated-Q6_K-GGUF --hf-file huihui-moe-1.5b-a0.6b-abliterated-q6_k.gguf -c 2048
```
|
John6666/noobai-xl-nai-xl-v-pred-colorfixed-v10-sdxl | John6666 | 2025-06-16T06:43:09Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"colorfix",
"v-pred",
"noobai",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-Vpred-1.0",
"base_model:finetune:Laxhar/noobai-XL-Vpred-1.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-06-16T06:36:52Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- colorfix
- v-pred
- noobai
- illustrious
base_model: Laxhar/noobai-XL-Vpred-1.0
---
Original model is [here](https://civitai.com/models/1672827?modelVersionId=1893403).
This model was created by [Volnovik](https://civitai.com/user/Volnovik).
|
P0L3/cliscibert_scivocab_uncased | P0L3 | 2025-06-16T06:42:22Z | 26 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"climate-change",
"domain-adaptation",
"masked-language-modeling",
"scientific-nlp",
"transformer",
"BERT",
"SciBERT",
"en",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-03-21T09:05:31Z | ---
language: en
license: mit
library_name: transformers
tags:
- climate-change
- domain-adaptation
- masked-language-modeling
- scientific-nlp
- transformer
- BERT
- SciBERT
metrics:
- f1
model-index:
- name: CliSciBERT
results:
- task:
type: text-classification
name: Climate NLP Tasks (ClimaBench)
dataset:
name: ClimaBench
type: benchmark
metrics:
- type: f1
name: Macro F1 (avg)
value: 60.502
---
# CliSciBERT 🌿📚
**CliSciBERT** is a domain-adapted version of [**SciBERT**](https://huggingface.co/allenai/scibert_scivocab_uncased), further pretrained on a curated corpus of peer-reviewed research papers in the climate change domain. It is designed to enhance performance on climate-focused scientific NLP tasks by adapting the general scientific knowledge of SciBERT to the specialized subdomain of climate research.
## 🔍 Overview
- **Base Model**: SciBERT (BERT-base architecture, scientific vocab)
- **Pretraining Method**: Continued pretraining (domain adaptation) using Masked Language Modeling (MLM)
- **Corpus**: Scientific papers focused on climate change and environmental science
- **Tokenizer**: SciBERT tokenizer (unchanged)
- **Language**: English
- **Domain**: Climate change research
## 📊 Performance
Evaluated on **ClimaBench**, a benchmark for climate-focused NLP tasks:
| Metric | Value |
|----------------|--------------|
| Macro F1 (avg) | 60.50|
| Tasks won | 0/7|
| Avg. Std Dev | 0.01772|
Note: While CliSciBERT builds on SciBERT’s scientific grounding, its domain specialization improves relevance for climate-related NLP tasks.
Climate performance model card:
| | CliSciBERT |
|---------------------------------|-----------------------------|
| 1. Model publicly available? | Yes |
| 2. Time to train final model | 463h |
| 3. Time for all experiments | 1,226h ~ 51 days |
| 4. Power of GPU and CPU | 0.250 kW + 0.013 kW |
| 5. Location for computations | Croatia |
| 6. Energy mix at location | 224.71 gCO<sub>2</sub>eq/kWh |
| 7. CO<sub>2</sub>eq for final model | 28 kg CO<sub>2</sub> |
| 8. CO<sub>2</sub>eq for all experiments | 74 kg CO<sub>2</sub> |
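
As a consistency check, rows 2-6 imply 463 h × (0.250 + 0.013) kW ≈ 121.8 kWh, and 121.8 kWh × 224.71 gCO<sub>2</sub>eq/kWh ≈ 27.4 kg CO<sub>2</sub>eq, in line with the reported 28 kg for the final model; scaling the same product to 1,226 h gives ≈ 72 kg, close to the reported 74 kg.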
## 🧪 Intended Uses
**Use for:**
- Scientific text classification and relation extraction in climate change literature
- Domain-specific document tagging or summarization
- Supporting knowledge graph population for climate research
**Not recommended for:**
- Non-climate or general news content
- Non-English corpora
- Highly informal or colloquial text
Example:
``` python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline
import torch
# Load the pretrained model and tokenizer
model_name = "P0L3/cliscibert_scivocab_uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
# Move model to GPU if available
device = 0 if torch.cuda.is_available() else -1
# Create a fill-mask pipeline
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer, device=device)
# Example input from scientific climate literature
text = "The increase in greenhouse gas emissions has significantly affected the [MASK] balance of the Earth."
# Run prediction
predictions = fill_mask(text)
# Show top predictions
print(text)
print(10*">")
for p in predictions:
print(f"{p['sequence']} — {p['score']:.4f}")
```
Output:
``` shell
The increase in greenhouse gas emissions has significantly affected the [MASK] balance of the Earth.
>>>>>>>>>>
the increase in greenhouse gas ... affected the energy balance of the earth. — 0.3911
the increase in greenhouse gas ... affected the radiative balance of the earth. — 0.2640
the increase in greenhouse gas ... affected the radiation balance of the earth. — 0.1233
the increase in greenhouse gas ... affected the carbon balance of the earth. — 0.0589
the increase in greenhouse gas ... affected the ecological balance of the earth. — 0.0332
```
## ⚠️ Limitations
- Retains SciBERT’s limitations outside the scientific domain
- May inherit biases from climate change literature
- No tokenizer retraining — tokenization optimized for general science, not climate-specific vocabulary
## 🧾 Citation
If you use this model, please cite:
```bibtex
@article{poleksic_etal_2025,
title={Climate Research Domain BERTs: Pretraining, Adaptation, and Evaluation},
author={Poleksić, Andrija and
Martinčić-Ipšić, Sanda},
journal={PREPRINT (Version 1)},
year={2025},
doi={https://doi.org/10.21203/rs.3.rs-6644722/v1}
}
```
|
P0L3/clirebert_clirevocab_uncased | P0L3 | 2025-06-16T06:41:47Z | 28 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"climate-change",
"domain-specific",
"masked-language-modeling",
"scientific-nlp",
"transformer",
"BERT",
"en",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-01-14T14:33:39Z | ---
language: en
license: mit
library_name: transformers
tags:
- climate-change
- domain-specific
- masked-language-modeling
- scientific-nlp
- transformer
- BERT
metrics:
- f1
model-index:
- name: CliReBERT
results:
- task:
type: text-classification
name: Climate NLP Tasks (ClimaBench)
dataset:
name: ClimaBench
type: benchmark
metrics:
- type: f1
name: Macro F1 (avg)
value: 65.447
---
# CliReBERT 🌍🧠
**CliReBERT (Climate Research BERT)** is a domain-specific BERT model pretrained from scratch on a curated corpus of peer-reviewed climate change research papers. It is built to support natural language processing tasks in climate science and environmental studies.
## 🔍 Overview
- **Architecture**: BERT-base (uncased)
- **Parameters**: ~110M
- **Pretraining Objective**: Masked Language Modeling (MLM)
- **Tokenizer**: Trained from scratch (WordPiece) on the same domain corpus
- **Language**: English
- **Domain**: Climate change research (scientific)
## 📊 Performance
Evaluated on **ClimaBench** (a climate-focused NLP benchmark):
| Metric | Value |
|----------------|------------|
| Macro F1 (avg) | **65.45** |
| Tasks won | 3 / 7 |
| Avg. Std Dev | 0.0118 |
Outperformed baseline models like SciBERT, RoBERTa, and ClimateBERT on key tasks.
Climate performance model card:
| | CliReBERT |
|---------------------------------|-----------------------------|
| 1. Model publicly available? | Yes |
| 2. Time to train final model | 463h |
| 3. Time for all experiments | 1,226h ~ 51 days |
| 4. Power of GPU and CPU | 0.250 kW + 0.013 kW |
| 5. Location for computations | Croatia |
| 6. Energy mix at location | 224.71 gCO<sub>2</sub>eq/kWh |
| 7. CO<sub>2</sub>eq for final model | 28 kg CO<sub>2</sub> |
| 8. CO<sub>2</sub>eq for all experiments | 74 kg CO<sub>2</sub> |
## 🧪 Intended Uses
**Use for:**
- Scientific information extraction in climate change research
- Classification, relation extraction, and document tagging in climate-related corpora
- Enhancing climate-focused knowledge graph construction
**Not suitable for:**
- General-purpose NLP tasks
- Text outside the scientific environmental domain
- Non-English applications
Example:
``` python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline
import torch
# Load the pretrained model and tokenizer
model_name = "P0L3/clirebert_clirevocab_uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
# Move model to GPU if available
device = 0 if torch.cuda.is_available() else -1
# Create a fill-mask pipeline
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer, device=device)
# Example input from scientific climate literature
text = "The increase in greenhouse gas emissions has significantly affected the [MASK] balance of the Earth."
# Run prediction
predictions = fill_mask(text)
# Show top predictions
print(text)
print(10*">")
for p in predictions:
print(f"{p['sequence']} — {p['score']:.4f}")
```
Output:
``` shell
The increase in greenhouse gas emissions has significantly affected the [MASK] balance of the Earth.
>>>>>>>>>>
the increase in greenhouse gas ... affected the energy balance of the earth . — 0.6922
the increase in greenhouse gas ... affected the mass balance of the earth . — 0.0631
the increase in greenhouse gas ... affected the radiation balance of the earth . — 0.0606
the increase in greenhouse gas ... affected the radiative balance of the earth . — 0.0517
the increase in greenhouse gas ... affected the carbon balance of the earth . — 0.0365
```
## ⚠️ Limitations
- Trained only on scientific literature (limited sociopolitical text exposure)
- Monolingual (English)
- May reflect publication biases from the scientific community
## 🧾 Citation
If you use this model, please cite:
```bibtex
@article{poleksic_etal_2025,
title={Climate Research Domain BERTs: Pretraining, Adaptation, and Evaluation},
author={Poleksić, Andrija and
Martinčić-Ipšić, Sanda},
journal={PREPRINT (Version 1)},
year={2025},
doi={https://doi.org/10.21203/rs.3.rs-6644722/v1}
}
```
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.15_0.05_epoch1 | MinaMila | 2025-06-16T06:40:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T06:38:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
numiros/Comma-Epsilon-v0.1-exl3 | numiros | 2025-06-16T06:39:36Z | 0 | 0 | null | [
"base_model:numiros/Comma-Epsilon-v0.1",
"base_model:finetune:numiros/Comma-Epsilon-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2025-06-16T05:54:20Z | ---
license: apache-2.0
base_model:
- numiros/Comma-Epsilon-v0.1
---
[4bpw](https://huggingface.co/numiros/Comma-Epsilon-v0.1-exl3/tree/4bpw)
[5bpw](https://huggingface.co/numiros/Comma-Epsilon-v0.1-exl3/tree/5bpw) |
Achalkamble/codeparrot | Achalkamble | 2025-06-16T06:39:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T06:36:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
prithivMLmods/visionOCR-3B-061125 | prithivMLmods | 2025-06-16T06:38:00Z | 36 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"text-generation-inference",
"OCR",
"Receipt",
"VisionOCR",
"Messy Handwriting OCR",
"conversational",
"en",
"zh",
"dataset:linxy/LaTeX_OCR",
"dataset:mychen76/ds_receipts_v2_eval",
"dataset:mychen76/invoices-and-receipts_ocr_v1",
"dataset:prithivMLmods/Latex-KIE",
"arxiv:2412.08746",
"arxiv:2309.00071",
"arxiv:2409.12191",
"arxiv:2308.12966",
"arxiv:2412.02210",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-06-11T15:09:35Z | ---
license: apache-2.0
language:
- en
- zh
tags:
- text-generation-inference
- OCR
- Receipt
- VisionOCR
- Messy Handwriting OCR
datasets:
- linxy/LaTeX_OCR
- mychen76/ds_receipts_v2_eval
- mychen76/invoices-and-receipts_ocr_v1
- prithivMLmods/Latex-KIE
base_model:
- Qwen/Qwen2.5-VL-3B-Instruct
pipeline_tag: image-text-to-text
library_name: transformers
---

# **visionOCR-3B-061125**
> The **visionOCR-3B-061125** model is a fine-tuned version of **Qwen/Qwen2.5-VL-3B-Instruct**, optimized for **Document-Level Optical Character Recognition (OCR)**, **long-context vision-language understanding**, and **accurate image-to-text conversion with mathematical LaTeX formatting**. Built on top of the Qwen2.5-VL architecture, this model significantly improves document comprehension, structured data extraction, and visual reasoning across diverse input formats.
# Key Enhancements
* **Advanced Document-Level OCR**: Capable of extracting structured content from complex, multi-page documents such as invoices, academic papers, forms, and scanned reports.
* **Enhanced Long-Context Vision-Language Understanding**: Designed to handle dense document layouts, long sequences of embedded text, tables, and diagrams with coherent cross-reference understanding.
* **State-of-the-Art Performance Across Resolutions**: Achieves competitive results on OCR and visual QA benchmarks such as DocVQA, MathVista, RealWorldQA, and MTVQA.
* **Video Understanding up to 20+ minutes**: Supports detailed comprehension of long-duration videos for content summarization, Q\&A, and multi-modal reasoning.
* **Visually-Grounded Device Interaction**: Enables mobile/robotic device operation via visual inputs and text-based instructions using contextual understanding and decision-making logic.
# Quick Start with Transformers
```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
"prithivMLmods/visionOCR-3B-061125", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("prithivMLmods/visionOCR-3B-061125")
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
},
{"type": "text", "text": "Describe this image."},
],
}
]
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
# Intended Use
This model is intended for:
* High-fidelity OCR from documents, forms, receipts, and printed or scanned materials.
* Image and document-based question answering for educational and enterprise applications.
* Extraction and LaTeX formatting of mathematical expressions from printed or handwritten content.
* Retrieval and summarization from long documents, slides, and multi-modal inputs.
* Multilingual OCR and structured content extraction for global use cases.
* Robotic or mobile automation with vision-guided contextual interaction.
# Limitations
* May show degraded performance on extremely low-quality or occluded images.
* Not optimized for real-time applications on low-resource or edge devices due to computational demands.
* Variable accuracy on uncommon or low-resource languages/scripts.
* Long video processing may require substantial memory and is not optimized for streaming applications.
* Visual token settings affect performance; suboptimal configurations can impact results.
* In rare cases, outputs may contain hallucinated or contextually misaligned information.
## References
* **DocVLM: Make Your VLM an Efficient Reader**
[https://arxiv.org/pdf/2412.08746v1](https://arxiv.org/pdf/2412.08746v1)
* **YaRN: Efficient Context Window Extension of Large Language Models**
[https://arxiv.org/pdf/2309.00071](https://arxiv.org/pdf/2309.00071)
* **Qwen2-VL: Enhancing Vision-Language Model’s Perception of the World at Any Resolution**
[https://arxiv.org/pdf/2409.12191](https://arxiv.org/pdf/2409.12191)
* **Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond**
[https://arxiv.org/pdf/2308.12966](https://arxiv.org/pdf/2308.12966)
* **A Comprehensive and Challenging OCR Benchmark for Evaluating Large Multimodal Models in Literacy**
[https://arxiv.org/pdf/2412.02210](https://arxiv.org/pdf/2412.02210) |
dgambettaphd/M_llm2_run2_gen10_WXS_doc1000_synt64_lr1e-04_acm_FRESH | dgambettaphd | 2025-06-16T06:37:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T06:37:35Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
cucucu666/huanhu-6.16 | cucucu666 | 2025-06-16T06:37:22Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"flux",
"flux-diffusers",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-Fill-dev",
"base_model:adapter:black-forest-labs/FLUX.1-Fill-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-16T03:41:06Z | ---
base_model: black-forest-labs/FLUX.1-Fill-dev
library_name: diffusers
license: other
instance_prompt: labii face, Crayon Shin-chan style, cheerful expression, big smile,
open mouth, plain color background
widget:
- text: labii face, Crayon Shin-chan style, cheerful expression, big smile, open mouth,
plain color background
output:
url: image_0.png
- text: labii face, Crayon Shin-chan style, cheerful expression, big smile, open mouth,
plain color background
output:
url: image_1.png
- text: labii face, Crayon Shin-chan style, cheerful expression, big smile, open mouth,
plain color background
output:
url: image_2.png
- text: labii face, Crayon Shin-chan style, cheerful expression, big smile, open mouth,
plain color background
output:
url: image_3.png
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- flux
- flux-diffusers
- template:sd-lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Flux-Fill DreamBooth LoRA - cucucu666/huanhu-6.16
<Gallery />
## Model description
These are cucucu666/huanhu-6.16 DreamBooth LoRA weights for black-forest-labs/FLUX.1-Fill-dev.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with a custom [Flux diffusers trainer](https://github.com/Sebastian-Zok/FLUX-Fill-LoRa-Training).
LoRA for the text encoder was not enabled.
## Trigger words
You should use `labii face, Crayon Shin-chan style, cheerful expression, big smile, open mouth, plain color background` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](cucucu666/huanhu-6.16/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('cucucu666/huanhu-6.16', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('labii face, Crayon Shin-chan style, cheerful expression, big smile, open mouth, plain color background').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
John6666/magnumspell-v10-sdxl | John6666 | 2025-06-16T06:36:50Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"mature",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-06-16T06:31:09Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- mature
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/1685170/magnumspell?modelVersionId=1907273).
This model was created by [Dark_Schneider](https://civitai.com/user/Dark_Schneider).
|
Sawu-Low3/t5-base-lora-stage1 | Sawu-Low3 | 2025-06-16T06:36:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T05:08:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ka-ops/Meta-Llama-3.1-8B-Instruct-FP8 | ka-ops | 2025-06-16T06:35:16Z | 0 | 0 | null | [
"safetensors",
"llama",
"fp8",
"vllm",
"text-generation",
"conversational",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:quantized:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"compressed-tensors",
"region:us"
] | text-generation | 2025-06-16T06:24:00Z | ---
tags:
- fp8
- vllm
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
pipeline_tag: text-generation
license: llama3.1
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
---
# Meta-Llama-3.1-8B-Instruct-FP8
## Model Overview
- **Model Architecture:** Meta-Llama-3.1
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
- **Weight quantization:** FP8
- **Activation quantization:** FP8
- **Intended Use Cases:** Intended for commercial and research use in multiple languages. Similarly to [Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct), this model is intended for assistant-like chat.
- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English.
- **Release Date:** 7/23/2024
- **Version:** 1.0
- **License(s):** [llama3.1](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B/blob/main/LICENSE)
- **Model Developers:** Neural Magic
Quantized version of [Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct).
It achieves an average score of 73.44 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 73.79.
### Model Optimizations
This model was obtained by quantizing the weights and activations of [Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) to FP8 data type, ready for inference with vLLM built from source.
This optimization reduces the number of bits per parameter from 16 to 8, reducing the disk size and GPU memory requirements by approximately 50%.
Only the weights and activations of the linear operators within transformer blocks are quantized. Symmetric per-tensor quantization is applied, in which a single linear scale per tensor maps the FP8 representations of the quantized weights and activations to their original range.
[LLM Compressor](https://github.com/vllm-project/llm-compressor) is used for quantization with 512 sequences of UltraChat.
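As an illustration of the per-tensor symmetric scheme described above, here is a minimal sketch (not the LLM Compressor implementation; requires PyTorch 2.1+ for `torch.float8_e4m3fn`):
```python
import torch
FP8_MAX = torch.finfo(torch.float8_e4m3fn).max  # 448.0, the largest finite FP8 E4M3 value
def quantize_per_tensor_fp8(x: torch.Tensor):
    # Symmetric per-tensor scheme: one scale for the whole tensor, zero-point fixed at 0
    scale = x.abs().max() / FP8_MAX
    x_fp8 = (x / scale).clamp(-FP8_MAX, FP8_MAX).to(torch.float8_e4m3fn)
    return x_fp8, scale
def dequantize(x_fp8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return x_fp8.to(torch.float32) * scale
w = torch.randn(256, 256)
w_fp8, s = quantize_per_tensor_fp8(w)
print("max abs error:", (w - dequantize(w_fp8, s)).abs().max().item())
```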
## Deployment
### Use with vLLM
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
model_id = "neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8"
sampling_params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=256)
tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
prompts = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
llm = LLM(model=model_id)
outputs = llm.generate(prompts, sampling_params)
generated_text = outputs[0].outputs[0].text
print(generated_text)
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
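For instance, an OpenAI-compatible server can be launched with vLLM's standard entrypoint (a minimal sketch; all flags other than the model name are left at their defaults):
```bash
# Serves the model at http://localhost:8000/v1 (vLLM's default port)
python -m vllm.entrypoints.openai.api_server \
  --model neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8
```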
## Creation
This model was created by applying [LLM Compressor with calibration samples from UltraChat](https://github.com/vllm-project/llm-compressor/blob/sa/big_model_support/examples/big_model_offloading/big_model_w8a8_calibrate.py), as presented in the code snippet below.
```python
import torch
from datasets import load_dataset
from transformers import AutoTokenizer
from llmcompressor.transformers import SparseAutoModelForCausalLM, oneshot
from llmcompressor.transformers.compression.helpers import (
calculate_offload_device_map,
custom_offload_device_map,
)
recipe = """
quant_stage:
quant_modifiers:
QuantizationModifier:
ignore: ["lm_head"]
config_groups:
group_0:
weights:
num_bits: 8
type: float
strategy: tensor
dynamic: false
symmetric: true
input_activations:
num_bits: 8
type: float
strategy: tensor
dynamic: false
symmetric: true
targets: ["Linear"]
"""
model_stub = "meta-llama/Meta-Llama-3.1-8B-Instruct"
model_name = model_stub.split("/")[-1]
device_map = calculate_offload_device_map(
model_stub, reserve_for_hessians=False, num_gpus=1, torch_dtype="auto"
)
model = SparseAutoModelForCausalLM.from_pretrained(
model_stub, torch_dtype="auto", device_map=device_map
)
tokenizer = AutoTokenizer.from_pretrained(model_stub)
output_dir = f"./{model_name}-FP8"
DATASET_ID = "HuggingFaceH4/ultrachat_200k"
DATASET_SPLIT = "train_sft"
NUM_CALIBRATION_SAMPLES = 512
MAX_SEQUENCE_LENGTH = 4096
ds = load_dataset(DATASET_ID, split=DATASET_SPLIT)
ds = ds.shuffle(seed=42).select(range(NUM_CALIBRATION_SAMPLES))
def preprocess(example):
return {
"text": tokenizer.apply_chat_template(
example["messages"],
tokenize=False,
)
}
ds = ds.map(preprocess)
def tokenize(sample):
return tokenizer(
sample["text"],
padding=False,
max_length=MAX_SEQUENCE_LENGTH,
truncation=True,
add_special_tokens=False,
)
ds = ds.map(tokenize, remove_columns=ds.column_names)
oneshot(
model=model,
output_dir=output_dir,
dataset=ds,
recipe=recipe,
max_seq_length=MAX_SEQUENCE_LENGTH,
num_calibration_samples=NUM_CALIBRATION_SAMPLES,
save_compressed=True,
)
```
## Evaluation
The model was evaluated on MMLU, ARC-Challenge, GSM-8K, Hellaswag, Winogrande and TruthfulQA.
Evaluation was conducted using the Neural Magic fork of [lm-evaluation-harness](https://github.com/neuralmagic/lm-evaluation-harness/tree/llama_3.1_instruct) (branch llama_3.1_instruct) and the [vLLM](https://docs.vllm.ai/en/stable/) engine.
This version of the lm-evaluation-harness includes versions of ARC-Challenge, GSM-8K, MMLU, and MMLU-cot that match the prompting style of [Meta-Llama-3.1-Instruct-evals](https://huggingface.co/datasets/meta-llama/Meta-Llama-3.1-8B-Instruct-evals).
### Accuracy
#### Open LLM Leaderboard evaluation scores
<table>
<tr>
<td><strong>Benchmark</strong>
</td>
<td><strong>Meta-Llama-3.1-8B-Instruct</strong>
</td>
<td><strong>Meta-Llama-3.1-8B-Instruct-FP8 (this model)</strong>
</td>
<td><strong>Recovery</strong>
</td>
</tr>
<tr>
<td>MMLU (5-shot)
</td>
<td>67.95
</td>
<td>67.97
</td>
<td>100.0%
</td>
</tr>
<tr>
<td>MMLU-cot (0-shot)
</td>
<td>71.24
</td>
<td>71.12
</td>
<td>99.83%
</td>
</tr>
<tr>
<td>ARC Challenge (0-shot)
</td>
<td>82.00
</td>
<td>81.66
</td>
<td>99.59%
</td>
</tr>
<tr>
<td>GSM-8K-cot (8-shot, strict-match)
</td>
<td>81.96
</td>
<td>81.12
</td>
<td>98.98%
</td>
</tr>
<tr>
<td>Hellaswag (10-shot)
</td>
<td>80.46
</td>
<td>80.40
</td>
<td>99.93%
</td>
</tr>
<tr>
<td>Winogrande (5-shot)
</td>
<td>78.45
</td>
<td>77.90
</td>
<td>99.30%
</td>
</tr>
<tr>
<td>TruthfulQA (0-shot, mc2)
</td>
<td>54.50
</td>
<td>53.92
</td>
<td>98.94%
</td>
</tr>
<tr>
<td><strong>Average</strong>
</td>
<td><strong>73.79</strong>
</td>
<td><strong>73.44</strong>
</td>
<td><strong>99.52%</strong>
</td>
</tr>
</table>
### Reproduction
The results were obtained using the following commands:
#### MMLU
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
--tasks mmlu \
--num_fewshot 5 \
--batch_size auto
```
#### MMLU-cot
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
--tasks mmlu_cot_0shot_llama_3.1_instruct \
--apply_chat_template \
--num_fewshot 0 \
--batch_size auto
```
#### ARC-Challenge
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
--tasks arc_challenge_llama_3.1_instruct \
--apply_chat_template \
--num_fewshot 0 \
--batch_size auto
```
#### GSM-8K
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
--tasks gsm8k_cot_llama_3.1_instruct \
--apply_chat_template \
--fewshot_as_multiturn \
--num_fewshot 8 \
--batch_size auto
```
#### Hellaswag
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
--tasks hellaswag \
--num_fewshot 10 \
--batch_size auto
```
#### Winogrande
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
--tasks winogrande \
--num_fewshot 5 \
--batch_size auto
```
#### TruthfulQA
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
--tasks truthfulqa \
--num_fewshot 0 \
--batch_size auto
``` |
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.15_0.15_epoch2 | MinaMila | 2025-06-16T06:33:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T06:31:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
John6666/kodorail-v21-sdxl | John6666 | 2025-06-16T06:31:07Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"asian",
"Japanese",
"merge",
"noobai",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-1.1",
"base_model:merge:Laxhar/noobai-XL-1.1",
"base_model:OnomaAIResearch/Illustrious-XL-v1.0",
"base_model:merge:OnomaAIResearch/Illustrious-XL-v1.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-06-16T06:25:00Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- asian
- Japanese
- merge
- noobai
- illustrious
base_model:
- OnomaAIResearch/Illustrious-XL-v1.0
- Laxhar/noobai-XL-1.1
---
The original model is [here](https://civitai.com/models/1423866/kodorail?modelVersionId=1907872).
This model was created by [Kodora](https://civitai.com/user/Kodora).
|
prithivMLmods/Ross-640-BMath-1.5B-GGUF | prithivMLmods | 2025-06-16T06:30:52Z | 215 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"math",
"text-generation",
"en",
"base_model:prithivMLmods/Ross-640-BMath-1.5B",
"base_model:quantized:prithivMLmods/Ross-640-BMath-1.5B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-11T12:19:02Z | ---
license: apache-2.0
language:
- en
base_model:
- prithivMLmods/Ross-640-BMath-1.5B
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- math
---
# **Ross-640-BMath-1.5B-GGUF**
> **Ross-640-BMath-1.5B** is an **experimental, high-precision math explanation model** fine-tuned on **Qwen2-1.5B**, designed to provide **step-by-step mathematical derivations** and **detailed concept explanations** across a wide range of mathematical domains. It is **not optimized for general reasoning or conversation**, and focuses primarily on **structured, non-reasoning math workflows** including algebra, calculus, number theory, and combinatorics.
## Model Files
| File Name | Size | Format | Description |
|-----------|------|--------|-------------|
| Ross-640-BMath-1.5B.F32.gguf | 6.18 GB | F32 | Full precision 32-bit floating point |
| Ross-640-BMath-1.5B.F16.gguf | 3.09 GB | F16 | Half precision 16-bit floating point |
| Ross-640-BMath-1.5B.BF16.gguf | 3.09 GB | BF16 | Brain floating point 16-bit |
| Ross-640-BMath-1.5B.Q8_0.gguf | 1.65 GB | Q8_0 | 8-bit quantized |
| Ross-640-BMath-1.5B.Q6_K.gguf | 1.27 GB | Q6_K | 6-bit quantized |
| Ross-640-BMath-1.5B.Q5_K_M.gguf | 1.13 GB | Q5_K_M | 5-bit quantized, medium quality |
| Ross-640-BMath-1.5B.Q5_K_S.gguf | 1.1 GB | Q5_K_S | 5-bit quantized, small quality |
| Ross-640-BMath-1.5B.Q4_K_M.gguf | 986 MB | Q4_K_M | 4-bit quantized, medium quality |
| Ross-640-BMath-1.5B.Q4_K_S.gguf | 940 MB | Q4_K_S | 4-bit quantized, small quality |
| Ross-640-BMath-1.5B.Q3_K_L.gguf | 880 MB | Q3_K_L | 3-bit quantized, large quality |
| Ross-640-BMath-1.5B.Q3_K_M.gguf | 824 MB | Q3_K_M | 3-bit quantized, medium quality |
| Ross-640-BMath-1.5B.Q3_K_S.gguf | 761 MB | Q3_K_S | 3-bit quantized, small quality |
| Ross-640-BMath-1.5B.Q2_K.gguf | 676 MB | Q2_K | 2-bit quantized |
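As a quick sketch of running one of these files locally (assuming a recent llama.cpp build that provides `llama-cli`; the prompt is illustrative):
```bash
# Fetch a single quant from this repo, then run it with llama.cpp
huggingface-cli download prithivMLmods/Ross-640-BMath-1.5B-GGUF \
  Ross-640-BMath-1.5B.Q5_K_M.gguf --local-dir .
llama-cli -m Ross-640-BMath-1.5B.Q5_K_M.gguf \
  -p "Explain, step by step, how to integrate x^2 * e^x."
```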
## Quants Usage
(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
 |
himedia/fincredit-gemma3-4b-lr5e05-bs2-r16-steps10-20250616_062824 | himedia | 2025-06-16T06:30:44Z | 0 | 0 | null | [
"safetensors",
"financial",
"credit-rating",
"korean",
"gemma",
"unsloth",
"fine-tuned",
"text-generation",
"conversational",
"ko",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-3B-Instruct",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-06-16T06:30:36Z | ---
language: ko
license: apache-2.0
base_model: unsloth/Llama-3.2-3B-Instruct
tags:
- financial
- credit-rating
- korean
- gemma
- unsloth
- fine-tuned
model_name: FinCreditLlama-3.2-3B
pipeline_tag: text-generation
---
# FinCreditLlama-3.2-3B
## Model Overview
FinCreditLlama-3.2-3B is a Korean language model designed specifically for financial credit assessment.
**Base model**: unsloth/Llama-3.2-3B-Instruct
**Dataset**: himedia/financial_dummy_data_v2
**Training method**: LoRA (Low-Rank Adaptation)
**Training date**: 20250616_062824
## Hyperparameters
- **Learning Rate**: 5e-05
- **Max Steps**: 10
- **Batch Size**: 2
- **Gradient Accumulation**: 4
- **LoRA r**: 16
- **LoRA alpha**: 16
- **Max Sequence Length**: 2048
- **Warmup Steps**: 5
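These settings correspond roughly to a PEFT/TRL configuration like the following (a hypothetical reconstruction for illustration only; the actual training script is not included in this card):
```python
from peft import LoraConfig
from trl import SFTConfig
peft_config = LoraConfig(r=16, lora_alpha=16)  # LoRA rank / alpha from the list above
training_args = SFTConfig(
    learning_rate=5e-5,
    max_steps=10,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    warmup_steps=5,
    max_seq_length=2048,
)
```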
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("himedia/fincredit-gemma3-4b-lr5e05-bs2-r16-steps10-20250616_062824")
model = AutoModelForCausalLM.from_pretrained("himedia/fincredit-gemma3-4b-lr5e05-bs2-r16-steps10-20250616_062824")
# Simple inference example (the prompt is kept in Korean, the model's target language)
prompt = "고객의 신용등급을 평가해주세요:"  # "Please assess the customer's credit rating:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
## Repository Name Breakdown
```
fincredit-gemma3-4b-lr5e05-bs2-r16-steps10-20250616_062824
```
- `fincredit-gemma3-4b`: base model name
- `lr5e05`: learning rate
- `bs2`: batch size
- `r16`: LoRA rank
- `steps10`: training steps
- `20250616_062824`: training timestamp
## Performance
This model has been fine-tuned on Korean financial text and is specialized for credit-assessment question answering.
## License
Apache 2.0
|
yukinoshitawebid/shortlink | yukinoshitawebid | 2025-06-16T06:29:39Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2025-06-09T10:15:34Z | ---
license: mit
---
https://short.sampangkab.go.id/Home
https://uinkhas.id/index.php?q=vhxwni
https://whyvpn.my.id/r?code=SL2506090001
https://v-online.id/f0f94920
https://doodster001.web.id/e/?id=iNSYacMWq
https://anichin.kee.my.id/
https://ddoodd.biz.id/f314b7
https://tlkm.id/bhtEqikdszUNzsp
https://2ur.jp/bjP1
http://www.teknofull.com.tr/#s=cwJ8KjseGBJ9KjskmLO0bLF4Gw17v7bycETomR5tb8lavZCyv8TkGfghGqKrFjYtFVWtW819bZegGENrGRXgb7OsQEOaWw0eFVMumROgGRQtvZ2ubLJtQfgiGfl0nRrkxP%3D%3D
https://fundn.eu/jitbl |
irqol123/m | irqol123 | 2025-06-16T06:27:55Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-16T06:27:55Z | ---
license: apache-2.0
---
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.15_0.15_epoch1 | MinaMila | 2025-06-16T06:27:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T06:25:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
prithivMLmods/Procyon-1.5B-Theorem-GGUF | prithivMLmods | 2025-06-16T06:23:46Z | 217 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"theorem",
"text-generation",
"en",
"base_model:prithivMLmods/Procyon-1.5B-Qwen2-Theorem",
"base_model:quantized:prithivMLmods/Procyon-1.5B-Qwen2-Theorem",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-11T09:19:59Z | ---
license: apache-2.0
language:
- en
base_model:
- prithivMLmods/Procyon-1.5B-Qwen2-Theorem
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- theorem
---
# **Procyon-1.5B-Qwen2-Theorem-GGUF**
> **Procyon-1.5B-Qwen2-Theorem** is an **experimental theorem explanation model** fine-tuned on **Qwen2-1.5B**. Specially crafted for mathematical theorem understanding, structured concept breakdowns, and non-reasoning based explanation tasks, it targets domains where clarity and formal structure take precedence over freeform reasoning.
## Model Files
| File Name | Size | Format | Description |
|-----------|------|--------|-------------|
| Procyon-1.5B-Qwen2-Theorem.F32.gguf | 7.11 GB | F32 | Full precision 32-bit floating point |
| Procyon-1.5B-Qwen2-Theorem.F16.gguf | 3.56 GB | F16 | Half precision 16-bit floating point |
| Procyon-1.5B-Qwen2-Theorem.BF16.gguf | 3.56 GB | BF16 | Brain floating point 16-bit |
| Procyon-1.5B-Qwen2-Theorem.Q8_0.gguf | 1.89 GB | Q8_0 | 8-bit quantized |
| Procyon-1.5B-Qwen2-Theorem.Q6_K.gguf | 1.46 GB | Q6_K | 6-bit quantized |
| Procyon-1.5B-Qwen2-Theorem.Q5_K_M.gguf | 1.29 GB | Q5_K_M | 5-bit quantized, medium quality |
| Procyon-1.5B-Qwen2-Theorem.Q5_K_S.gguf | 1.26 GB | Q5_K_S | 5-bit quantized, small quality |
| Procyon-1.5B-Qwen2-Theorem.Q4_K_M.gguf | 1.12 GB | Q4_K_M | 4-bit quantized, medium quality |
| Procyon-1.5B-Qwen2-Theorem.Q4_K_S.gguf | 1.07 GB | Q4_K_S | 4-bit quantized, small quality |
| Procyon-1.5B-Qwen2-Theorem.Q3_K_L.gguf | 980 MB | Q3_K_L | 3-bit quantized, large quality |
| Procyon-1.5B-Qwen2-Theorem.Q3_K_M.gguf | 924 MB | Q3_K_M | 3-bit quantized, medium quality |
| Procyon-1.5B-Qwen2-Theorem.Q3_K_S.gguf | 861 MB | Q3_K_S | 3-bit quantized, small quality |
| Procyon-1.5B-Qwen2-Theorem.Q2_K.gguf | 753 MB | Q2_K | 2-bit quantized |
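As a quick sketch of running one of these files locally (assuming a recent llama.cpp build that provides `llama-cli`; the prompt is illustrative):
```bash
# Fetch a single quant from this repo, then run it with llama.cpp
huggingface-cli download prithivMLmods/Procyon-1.5B-Theorem-GGUF \
  Procyon-1.5B-Qwen2-Theorem.Q4_K_M.gguf --local-dir .
llama-cli -m Procyon-1.5B-Qwen2-Theorem.Q4_K_M.gguf \
  -p "State the Mean Value Theorem with its hypotheses and conclusion."
```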
## Quants Usage
(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
 |
mlx-community/Jan-nano-8bit | mlx-community | 2025-06-16T06:23:32Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:Menlo/Jan-nano",
"base_model:quantized:Menlo/Jan-nano",
"license:apache-2.0",
"8-bit",
"region:us"
] | text-generation | 2025-06-16T06:19:04Z | ---
license: apache-2.0
base_model: Menlo/Jan-nano
pipeline_tag: text-generation
library_name: mlx
tags:
- mlx
---
# mlx-community/Jan-nano-8bit
This model [mlx-community/Jan-nano-8bit](https://huggingface.co/mlx-community/Jan-nano-8bit) was
converted to MLX format from [Menlo/Jan-nano](https://huggingface.co/Menlo/Jan-nano)
using mlx-lm version **0.25.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Jan-nano-8bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
mlx-community/llm-jp-3.1-13b-instruct4 | mlx-community | 2025-06-16T06:21:21Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"ja",
"base_model:llm-jp/llm-jp-3.1-13b-instruct4",
"base_model:finetune:llm-jp/llm-jp-3.1-13b-instruct4",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-06-16T05:37:34Z | ---
license: apache-2.0
language:
- en
- ja
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
pipeline_tag: text-generation
library_name: mlx
inference: false
base_model: llm-jp/llm-jp-3.1-13b-instruct4
tags:
- mlx
---
# mlx-community/llm-jp-3.1-13b-instruct4
This model [mlx-community/llm-jp-3.1-13b-instruct4](https://huggingface.co/mlx-community/llm-jp-3.1-13b-instruct4) was
converted to MLX format from [llm-jp/llm-jp-3.1-13b-instruct4](https://huggingface.co/llm-jp/llm-jp-3.1-13b-instruct4)
using mlx-lm version **0.24.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/llm-jp-3.1-13b-instruct4")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
prithivMLmods/Procyon-1.5B-Qwen2-Theorem | prithivMLmods | 2025-06-16T06:20:18Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"math",
"theorem",
"SFT",
"trl",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-11T09:16:57Z | ---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen2.5-1.5B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- math
- theorem
- SFT
- trl
---

# **Procyon-1.5B-Qwen2-Theorem**
> **Procyon-1.5B-Qwen2-Theorem** is an **experimental theorem explanation model** fine-tuned on **Qwen2-1.5B**. Specially crafted for mathematical theorem understanding, structured concept breakdowns, and non-reasoning based explanation tasks, it targets domains where clarity and formal structure take precedence over freeform reasoning.
> [!NOTE]
> GGUF: [https://huggingface.co/prithivMLmods/Procyon-1.5B-Qwen2-Theorem-GGUF](https://huggingface.co/prithivMLmods/Procyon-1.5B-Qwen2-Theorem-GGUF)
---
## **Key Features**
1. **Mathematical Theorem Explanation**
Designed to deliver structured, formal, and accessible explanations of theorems across pure and applied mathematics, including areas such as algebra, calculus, topology, and number theory.
2. **Concept Breakdown without Deep Reasoning**
Focuses on **clarity over inference**, offering **non-reasoning-based breakdowns** suitable for educational tools, step-by-step formal writing, and documentation-heavy workflows.
3. **Concise and Interpretable Output**
Outputs content that aligns with pedagogical clarity: definitions, hypotheses, conclusions, and related implications—all in clean, human-readable structure.
4. **Multi-Format Support**
Capable of generating content in formats such as **LaTeX**, **Markdown**, **JSON (structured concept trees)**, and plain text, suitable for academic publishing and automated knowledge bases.
5. **Lightweight and Efficient**
With a **1.5B parameter footprint**, it is ideal for deployment on **edge devices**, **local academic tools**, and **integrated learning platforms**, offering quick responses without heavy compute demands.
---
## **Quickstart with Transformers**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "prithivMLmods/Procyon-1.5B-Qwen2-Theorem"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Explain the Fundamental Theorem of Calculus in simple terms with hypotheses and conclusion."
messages = [
{"role": "system", "content": "You are an assistant skilled at explaining mathematical theorems in a structured and simple format."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
---
## **Intended Use**
* Theorem explanation and educational enrichment
* Math-aware structured content generation
* LaTeX and Markdown generation for academic writing
* Technical teaching tools and tutoring support
* Early-stage research on symbolic language learning
---
## **Limitations**
* **Not designed for deep reasoning or proof synthesis**
* May underperform in conversational, general-purpose tasks
* Best suited for deterministic, formulaic, and structured outputs
* Performance on non-mathematical or abstract logical tasks may be limited |
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.15_0.25_epoch2 | MinaMila | 2025-06-16T06:20:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T06:18:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q5_K_M-GGUF | Triangle104 | 2025-06-16T06:18:44Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition",
"base_model:quantized:cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-16T06:16:48Z | ---
license: apache-2.0
base_model: cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q5_K_M-GGUF
This model was converted to GGUF format from [`cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition`](https://huggingface.co/cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition) for more details on the model.
---
Dolphin Mistral 24B Venice Edition is a collaborative project we undertook with Venice.ai with the goal of creating the most uncensored version of Mistral 24B for use within the Venice ecosystem.
Dolphin Mistral 24B Venice Edition is now live on https://venice.ai/ as “Venice Uncensored,” the new default model for all Venice users.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q5_K_M-GGUF --hf-file dolphin-mistral-24b-venice-edition-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q5_K_M-GGUF --hf-file dolphin-mistral-24b-venice-edition-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q5_K_M-GGUF --hf-file dolphin-mistral-24b-venice-edition-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q5_K_M-GGUF --hf-file dolphin-mistral-24b-venice-edition-q5_k_m.gguf -c 2048
```
|
New-tutorial-parveen-viral-vodeo/FULL.VIDEO.parveen.Viral.Video.Tutorials.Official | New-tutorial-parveen-viral-vodeo | 2025-06-16T06:17:12Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-16T06:10:43Z | <a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
|
IoanaLiviaPopescu/real-data-synth-data-1600-1-Wavenet-B-whisper-small | IoanaLiviaPopescu | 2025-06-16T06:15:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ro",
"dataset:IoanaLiviaPopescu/RealVoiceSynthVoice-1600-1-Wavenet-B",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-06-16T04:56:07Z | ---
library_name: transformers
language:
- ro
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- IoanaLiviaPopescu/RealVoiceSynthVoice-1600-1-Wavenet-B
metrics:
- wer
model-index:
- name: IoanaLiviaPopescu/IoanaLiviaPopescu/real-data-synth-data-1600-1-Wavenet-B-whisper-small
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: IoanaLiviaPopescu/RealVoiceSynthVoice-1600-1-Wavenet-B
type: IoanaLiviaPopescu/RealVoiceSynthVoice-1600-1-Wavenet-B
config: default
split: test
args: 'split: validation'
metrics:
- name: Wer
type: wer
value: 16.79881984141619
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IoanaLiviaPopescu/real-data-synth-data-1600-1-Wavenet-B-whisper-small
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the IoanaLiviaPopescu/RealVoiceSynthVoice-1600-1-Wavenet-B dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3742
- Wer: 16.7988
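A minimal inference sketch (assuming the standard 🤗 Transformers ASR pipeline; `sample_ro.wav` is a hypothetical path to a Romanian audio file):
```python
from transformers import pipeline
# Load the fine-tuned checkpoint into an automatic-speech-recognition pipeline
asr = pipeline(
    "automatic-speech-recognition",
    model="IoanaLiviaPopescu/real-data-synth-data-1600-1-Wavenet-B-whisper-small",
)
print(asr("sample_ro.wav")["text"])  # sample_ro.wav is a placeholder file
```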
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (bitsandbytes `ADAMW_BNB`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 0 | 0 | 0.6024 | 27.8812 |
| 0.2466 | 1.0 | 63 | 0.3919 | 17.3336 |
| 0.0899 | 2.0 | 126 | 0.3717 | 16.8726 |
| 0.0465 | 3.0 | 189 | 0.3742 | 16.7988 |
| 0.0265 | 4.0 | 252 | 0.3877 | 17.2598 |
| 0.0187 | 5.0 | 315 | 0.4030 | 17.5180 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
VIDEO-Parveen-Viral-Video/VIRAL.CLIP.Parveen.Viral.Video.Tutorial.Official | VIDEO-Parveen-Viral-Video | 2025-06-16T06:14:34Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-16T06:14:19Z | <a href="https://t.co/dTvnXACQMR" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
nirma-meena-viral-video-original/Nirma.Meena.Viral.Video.Original.Link | nirma-meena-viral-video-original | 2025-06-16T06:14:09Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-16T06:09:29Z | ---
license: apache-2.0
---
[](https://tinyurl.com/38v3p999)
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.15_0.25_epoch1 | MinaMila | 2025-06-16T06:13:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T06:11:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.5_0.25_0.25_epoch1 | MinaMila | 2025-06-16T06:12:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T06:10:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Nimra-Mehra-Viral-Video123/NEW.VIDEO.Nirma.Meena.Viral.Video.Link.FULL.HD | Nimra-Mehra-Viral-Video123 | 2025-06-16T06:11:41Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-16T06:10:33Z | [](https://tinyurl.com/4va3nzzc) |
Mezzo-Fun-Official-Viral-Video/Full.VIDEO.Mizo.Fun.Viral.Video.Tutorial.Official | Mezzo-Fun-Official-Viral-Video | 2025-06-16T06:11:31Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-16T06:11:15Z | <a href="https://t.co/98E3uGhPfJ" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
enpeizhao/qwen2_5-3b-instruct-trl-sft-all-in-one-8_adapter | enpeizhao | 2025-06-16T06:10:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T06:10:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
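The repository name suggests this is a PEFT adapter for the companion full model; a hedged sketch, assuming the base model id taken from the paired `enpeizhao/qwen2_5-3b-instruct-trl-sft-all-in-one-8` card:

```python
from peft import PeftModel
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

base_id = "Qwen/Qwen2.5-VL-3B-Instruct"  # assumption: taken from the companion model card
base = Qwen2_5_VLForConditionalGeneration.from_pretrained(base_id, device_map="auto")
# Attach the adapter weights from this repo on top of the base model
model = PeftModel.from_pretrained(base, "enpeizhao/qwen2_5-3b-instruct-trl-sft-all-in-one-8_adapter")
processor = AutoProcessor.from_pretrained(base_id)
```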
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
enpeizhao/qwen2_5-3b-instruct-trl-sft-all-in-one-8 | enpeizhao | 2025-06-16T06:10:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T05:01:33Z | ---
base_model: Qwen/Qwen2.5-VL-3B-Instruct
library_name: transformers
model_name: qwen2_5-3b-instruct-trl-sft-all-in-one-8
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen2_5-3b-instruct-trl-sft-all-in-one-8
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="enpeizhao/qwen2_5-3b-instruct-trl-sft-all-in-one-8", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/my-pred-team/enpeizhao_qwen2_5-3b-instruct-trl-sft-all-in-one-8/runs/xx787ryb)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0.dev0
- Transformers: 4.53.0.dev0
- Pytorch: 2.4.1+cu121
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
BootesVoid/cmbyjnk1403xvrdqsg2kyovgu_cmbykq3fx03zcrdqse4makkvd | BootesVoid | 2025-06-16T06:09:54Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-16T06:09:52Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: EMILY
---
# Cmbyjnk1403Xvrdqsg2Kyovgu_Cmbykq3Fx03Zcrdqse4Makkvd
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using the AI Toolkit trainer: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `EMILY` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "EMILY",
"lora_weights": "https://huggingface.co/BootesVoid/cmbyjnk1403xvrdqsg2kyovgu_cmbykq3fx03zcrdqse4makkvd/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbyjnk1403xvrdqsg2kyovgu_cmbykq3fx03zcrdqse4makkvd', weight_name='lora.safetensors')
image = pipeline('EMILY').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbyjnk1403xvrdqsg2kyovgu_cmbykq3fx03zcrdqse4makkvd/discussions) to add images that show off what you’ve made with this LoRA.
|
Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q5_K_S-GGUF | Triangle104 | 2025-06-16T06:08:49Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition",
"base_model:quantized:cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-16T06:06:40Z | ---
license: apache-2.0
base_model: cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q5_K_S-GGUF
This model was converted to GGUF format from [`cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition`](https://huggingface.co/cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition) for more details on the model.
---
Dolphin Mistral 24B Venice Edition is a collaborative project we undertook with Venice.ai, aimed at creating the most uncensored version of Mistral 24B for use within the Venice ecosystem.
Dolphin Mistral 24B Venice Edition is now live on https://venice.ai/ as “Venice Uncensored,” the new default model for all Venice users.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q5_K_S-GGUF --hf-file dolphin-mistral-24b-venice-edition-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q5_K_S-GGUF --hf-file dolphin-mistral-24b-venice-edition-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q5_K_S-GGUF --hf-file dolphin-mistral-24b-venice-edition-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q5_K_S-GGUF --hf-file dolphin-mistral-24b-venice-edition-q5_k_s.gguf -c 2048
```
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.15_0.5_epoch2 | MinaMila | 2025-06-16T06:06:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T06:04:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
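A minimal starter sketch, assuming the repo id from this card; `trust_remote_code=True` follows from the `custom_code` tag:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.15_0.5_epoch2"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, device_map="auto")

# Build a chat-formatted prompt and generate a reply (prompt is illustrative)
messages = [{"role": "user", "content": "Summarize what you can do in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```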
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
liddd3/ppo-LunarLander-v2 | liddd3 | 2025-06-16T06:05:02Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-16T06:04:40Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 257.10 +/- 20.33
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Load the trained agent from the Hub (a minimal sketch; the checkpoint filename is an assumption based on this repo's naming convention):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the PPO policy (filename assumed)
checkpoint = load_from_hub(repo_id="liddd3/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
K10S/mistral-student-finetune | K10S | 2025-06-16T06:02:03Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"region:us"
] | null | 2025-06-16T06:01:58Z | ---
base_model: mistralai/Mistral-7B-Instruct-v0.1
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
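A minimal sketch for loading this PEFT adapter onto its base model (the base id comes from this card's metadata; the prompt is illustrative):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.1"
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
# Attach the adapter weights from this repo on top of the base model
model = PeftModel.from_pretrained(base, "K10S/mistral-student-finetune")
tokenizer = AutoTokenizer.from_pretrained(base_id)

inputs = tokenizer("[INST] Explain gradient descent in one sentence. [/INST]", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```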
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
parveen-bilasipara-viral-vid/VIDEOs.18k.parveen.viral.video.link.on.social.media | parveen-bilasipara-viral-vid | 2025-06-16T06:00:25Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-16T05:45:10Z | <a rel="nofollow" href="https://tinyurl.com/2urtu5zm">🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶 L𝚎aᴋed Video V𝐢ral Video</a>
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a> |
parveen-bilasipara-viral-video/Original.18.parveen.viral.video.on.social.media | parveen-bilasipara-viral-video | 2025-06-16T06:00:19Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-16T05:44:46Z | <a rel="nofollow" href="https://tinyurl.com/2urtu5zm">🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶 L𝚎aᴋed Video V𝐢ral Video</a>
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a> |
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.15_0.5_epoch1 | MinaMila | 2025-06-16T05:59:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T05:57:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q4_K_M-GGUF | Triangle104 | 2025-06-16T05:58:32Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition",
"base_model:quantized:cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-16T05:54:23Z | ---
license: apache-2.0
base_model: cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q4_K_M-GGUF
This model was converted to GGUF format from [`cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition`](https://huggingface.co/cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition) for more details on the model.
---
Dolphin Mistral 24B Venice Edition is a collaborative project we undertook with Venice.ai, aimed at creating the most uncensored version of Mistral 24B for use within the Venice ecosystem.
Dolphin Mistral 24B Venice Edition is now live on https://venice.ai/ as “Venice Uncensored,” the new default model for all Venice users.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q4_K_M-GGUF --hf-file dolphin-mistral-24b-venice-edition-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q4_K_M-GGUF --hf-file dolphin-mistral-24b-venice-edition-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q4_K_M-GGUF --hf-file dolphin-mistral-24b-venice-edition-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q4_K_M-GGUF --hf-file dolphin-mistral-24b-venice-edition-q4_k_m.gguf -c 2048
```
|
01PrathamS/text2sql_finetune | 01PrathamS | 2025-06-16T05:56:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:finetune:microsoft/Phi-3-mini-4k-instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T05:56:35Z | ---
base_model: microsoft/Phi-3-mini-4k-instruct
library_name: transformers
model_name: text2sql_finetune
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for text2sql_finetune
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="01PrathamS/text2sql_finetune", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.12.1
- Transformers: 4.46.2
- Pytorch: 2.6.0+cu124
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Nirma-Meena-Full-Video/Full-Viral.Nirma.Nirma.Meena.Viral.Video.lady | Nirma-Meena-Full-Video | 2025-06-16T05:55:49Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-16T05:55:04Z | ---
license: apache-2.0
---
[](https://bit.ly/4lb0YGM)
|
Khushi-Rao/VIDEO.mezzofun.Khushi.Rao.Viral.Video.Tutorial.Official | Khushi-Rao | 2025-06-16T05:54:20Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-16T05:53:44Z | Khushi Rao Viral video took the internet viewers on various Leaked social media platforms. Khushi Rao Video, a young and talented digital creator, recently became famous thanks to this interesting video.
<a href="https://t.co/98E3uGhPfJ" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
VIDEO-Parveen-viral-video-Clip/EXCLUSIVE.Shakila.Parvin.Viral.Video.Original.Link | VIDEO-Parveen-viral-video-Clip | 2025-06-16T05:54:08Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-16T05:53:48Z | <a href="https://t.co/dTvnXACQMR" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.15_0.75_epoch2 | MinaMila | 2025-06-16T05:52:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T05:51:03Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Renugadevi82/cisco-nx-ai-4bit | Renugadevi82 | 2025-06-16T05:51:37Z | 0 | 0 | null | [
"safetensors",
"llama",
"cisco",
"networking",
"tinyllama",
"4bit",
"quantized",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-06-15T13:37:45Z | ---
tags:
- cisco
- networking
- llama
- tinyllama
- 4bit
- quantized
license: apache-2.0
language: en
---
# Cisco Network Configuration Model (4-bit Quantized)
## Usage with 4-bit Quantization
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.float16
)
model = AutoModelForCausalLM.from_pretrained(
"Renugadevi82/cisco-nx-ai-4bit",
quantization_config=bnb_config,
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Renugadevi82/cisco-nx-ai-4bit")
```
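Once loaded, generation works the same way as with the 16-bit variant; a short example mirroring the sibling card (the prompt is illustrative):

```python
prompt = "Configure VLAN 100 with name Management"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```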
## Memory Requirements
- 4-bit: ~0.8GB VRAM
- 16-bit: ~2.5GB VRAM
|
Renugadevi82/cisco-nx-ai-16bit | Renugadevi82 | 2025-06-16T05:49:53Z | 0 | 0 | null | [
"safetensors",
"llama",
"cisco",
"networking",
"tinyllama",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-06-15T13:36:34Z | ---
tags:
- cisco
- networking
- llama
- tinyllama
license: apache-2.0
language: en
---
# Cisco Network Configuration Model (16-bit)
Fine-tuned TinyLlama model for Cisco network configuration tasks.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("Renugadevi82/cisco-nx-ai-16bit")
tokenizer = AutoTokenizer.from_pretrained("Renugadevi82/cisco-nx-ai-16bit")
prompt = "Configure VLAN 100 with name Management"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
j1a0m0e7s/LID_DryPond | j1a0m0e7s | 2025-06-16T05:49:15Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-08T15:19:11Z | ---
license: apache-2.0
---
|
Pakistani-Young-Couple-Viral-Video/VIDEO.Pakistani.Young.Couple.Viral.Video.Tutorial.Official | Pakistani-Young-Couple-Viral-Video | 2025-06-16T05:48:52Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-16T05:44:27Z | <a href="https://t.co/98E3uGhPfJ" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
sergioalves/26f3c459-1dc9-4d0d-b907-7258ee195a89 | sergioalves | 2025-06-16T05:48:40Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-1.7B",
"base_model:adapter:unsloth/SmolLM2-1.7B",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-16T05:23:29Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-1.7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 26f3c459-1dc9-4d0d-b907-7258ee195a89
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/SmolLM2-1.7B
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 374958181cb5f0a5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_input: input
field_instruction: instruct
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 0.8
group_by_length: false
hub_model_id: sergioalves/26f3c459-1dc9-4d0d-b907-7258ee195a89
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-07
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.3
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 300
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/374958181cb5f0a5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3f11e093-22a6-4174-9a7a-02e2857fdaec
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 3f11e093-22a6-4174-9a7a-02e2857fdaec
warmup_steps: 30
weight_decay: 0.05
xformers_attention: true
```
</details><br>
# 26f3c459-1dc9-4d0d-b907-7258ee195a89
This model is a fine-tuned version of [unsloth/SmolLM2-1.7B](https://huggingface.co/unsloth/SmolLM2-1.7B) on the dataset specified in the Axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 1.8659
## Model description
More information needed
## Intended uses & limitations
More information needed
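As a starting point, the LoRA adapter can be loaded onto its base model (a sketch; ids are taken from the Axolotl config above):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/SmolLM2-1.7B", device_map="auto")
# Attach the adapter weights from this repo on top of the base model
model = PeftModel.from_pretrained(base, "sergioalves/26f3c459-1dc9-4d0d-b907-7258ee195a89")
tokenizer = AutoTokenizer.from_pretrained("unsloth/SmolLM2-1.7B")
```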
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- training_steps: 300
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.8987 | 0.0004 | 1 | 1.8671 |
| 1.6803 | 0.0561 | 150 | 1.8663 |
| 1.6321 | 0.1123 | 300 | 1.8659 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
hhua2/finecaption | hhua2 | 2025-06-16T05:48:23Z | 0 | 0 | null | [
"safetensors",
"en",
"dataset:hhua2/CompositionCap",
"arxiv:2411.15411",
"license:apache-2.0",
"region:us"
] | null | 2024-11-27T20:29:28Z | ---
license: apache-2.0
language:
- en
datasets:
- hhua2/CompositionCap
---
This repository contains the data of the paper [FINECAPTION: Compositional Image Captioning Focusing on Wherever You Want at Any Granularity](https://huggingface.co/papers/2411.15411). |
CrucibleLab-TG/L3.3-NS-Dark-Ages-70b-v0.1 | CrucibleLab-TG | 2025-06-16T05:48:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:CrucibleLab-TG/L3.3-Dark-Prose-70b-v0.1",
"base_model:merge:CrucibleLab-TG/L3.3-Dark-Prose-70b-v0.1",
"base_model:CrucibleLab-TG/L3.3-Negative-RP-70b-v0.1",
"base_model:merge:CrucibleLab-TG/L3.3-Negative-RP-70b-v0.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T05:39:36Z | ---
base_model:
- CrucibleLab-TG/L3.3-Dark-Prose-70b-v0.1
- CrucibleLab-TG/L3.3-Negative-RP-70b-v0.1
library_name: transformers
tags:
- mergekit
- merge
---
# MERGE1
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the NuSLERP merge method.
### Models Merged
The following models were included in the merge:
* [CrucibleLab-TG/L3.3-Dark-Prose-70b-v0.1](https://huggingface.co/CrucibleLab-TG/L3.3-Dark-Prose-70b-v0.1)
* [CrucibleLab-TG/L3.3-Negative-RP-70b-v0.1](https://huggingface.co/CrucibleLab-TG/L3.3-Negative-RP-70b-v0.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: CrucibleLab-TG/L3.3-Dark-Prose-70b-v0.1
parameters:
weight: 0.5
- model: CrucibleLab-TG/L3.3-Negative-RP-70b-v0.1
parameters:
weight: 0.5
merge_method: nuslerp
dtype: float32
out_dtype: bfloat16
chat_template: llama3
tokenizer:
source: CrucibleLab-TG/L3.3-Negative-RP-70b-v0.1
```
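To reproduce a merge like this, the config can be passed to mergekit's CLI; a sketch, assuming mergekit is installed and the YAML above is saved as `config.yaml`:

```bash
pip install mergekit
# Output directory and the --cuda flag are illustrative
mergekit-yaml config.yaml ./merged-model --cuda
```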
|
Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q4_K_S-GGUF | Triangle104 | 2025-06-16T05:47:44Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition",
"base_model:quantized:cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-16T05:36:29Z | ---
license: apache-2.0
base_model: cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q4_K_S-GGUF
This model was converted to GGUF format from [`cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition`](https://huggingface.co/cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition) for more details on the model.
---
Dolphin Mistral 24B Venice Edition is a collaborative project we undertook with Venice.ai, aimed at creating the most uncensored version of Mistral 24B for use within the Venice ecosystem.
Dolphin Mistral 24B Venice Edition is now live on https://venice.ai/ as “Venice Uncensored,” the new default model for all Venice users.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q4_K_S-GGUF --hf-file dolphin-mistral-24b-venice-edition-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q4_K_S-GGUF --hf-file dolphin-mistral-24b-venice-edition-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q4_K_S-GGUF --hf-file dolphin-mistral-24b-venice-edition-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q4_K_S-GGUF --hf-file dolphin-mistral-24b-venice-edition-q4_k_s.gguf -c 2048
```
|
tyz-own/dummy-model | tyz-own | 2025-06-16T05:47:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-06-16T05:47:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
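In the absence of an official snippet, a minimal sketch based on the repo's tags (`camembert`, `fill-mask`); the example sentence and expected outputs are assumptions, not documented behavior:
```python
from transformers import pipeline

# Assumes the checkpoint loads with the standard fill-mask pipeline.
fill_mask = pipeline("fill-mask", model="tyz-own/dummy-model")

# CamemBERT-style models use "<mask>" as the mask token.
for pred in fill_mask("Le camembert est <mask> !"):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```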
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
IoanaLiviaPopescu/real-data-synth-data-1600-1-Emil-Neural-whisper-small | IoanaLiviaPopescu | 2025-06-16T05:45:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ro",
"dataset:IoanaLivia/RealVoiceSynthVoice-1600-1-Emil-Neural",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-06-16T04:25:40Z | ---
library_name: transformers
language:
- ro
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- IoanaLivia/RealVoiceSynthVoice-1600-1-Emil-Neural
metrics:
- wer
model-index:
- name: IoanaLiviaPopescu/real-data-synth-data-1600-1-Emil-Neural-whisper-small
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: IoanaLivia/RealVoiceSynthVoice-1600-1-Emil-Neural
type: IoanaLivia/RealVoiceSynthVoice-1600-1-Emil-Neural
config: default
split: test
args: 'split: validation'
metrics:
- name: Wer
type: wer
value: 15.637101235478518
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IoanaLiviaPopescu/real-data-synth-data-1600-1-Emil-Neural-whisper-small
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the IoanaLivia/RealVoiceSynthVoice-1600-1-Emil-Neural dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3638
- Wer: 15.6371
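For quick inference, a minimal sketch using the standard 🤗 `pipeline` API (the audio file name is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned Romanian checkpoint for speech recognition.
asr = pipeline(
    "automatic-speech-recognition",
    model="IoanaLiviaPopescu/real-data-synth-data-1600-1-Emil-Neural-whisper-small",
)

# Transcribe a local audio file (any format ffmpeg can decode).
print(asr("sample.wav")["text"])
```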
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (bitsandbytes, `OptimizerNames.ADAMW_BNB`) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 0 | 0 | 0.6024 | 27.8812 |
| 0.2603 | 1.0 | 63 | 0.3926 | 16.8173 |
| 0.0931 | 2.0 | 126 | 0.3638 | 15.6371 |
| 0.0472 | 3.0 | 189 | 0.3668 | 16.6697 |
| 0.0268 | 4.0 | 252 | 0.3798 | 16.2087 |
| 0.0187 | 5.0 | 315 | 0.3943 | 16.1165 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
s-emanuilov/Tucan-27B-v1.0 | s-emanuilov | 2025-06-16T05:45:08Z | 54 | 1 | null | [
"safetensors",
"gemma2",
"function_calling",
"MCP",
"tool_use",
"bg",
"arxiv:2503.23278",
"arxiv:2408.00118",
"arxiv:2412.10893",
"base_model:INSAIT-Institute/BgGPT-Gemma-2-27B-IT-v1.0",
"base_model:finetune:INSAIT-Institute/BgGPT-Gemma-2-27B-IT-v1.0",
"license:gemma",
"region:us"
] | null | 2025-06-08T08:59:21Z | ---
license: gemma
language:
- bg
base_model:
- INSAIT-Institute/BgGPT-Gemma-2-27B-IT-v1.0
tags:
- function_calling
- MCP
- tool_use
---
# Tucan-27B-v1.0
## Bulgarian Language Models for Function Calling 🇧🇬
> 📄 **Full methodology, dataset details, and evaluation results coming in the upcoming paper**
## Overview 🚀
TUCAN (Tool-Using Capable Assistant Navigator) is a series of open-source Bulgarian language models fine-tuned specifically for function calling and tool use.
These models can interact with external tools, APIs, and databases, making them appropriate for building AI agents and [Model Context Protocol (MCP)](https://arxiv.org/abs/2503.23278) applications.
Built on top of [BgGPT models](https://huggingface.co/collections/INSAIT-Institute/bggpt-gemma-2-673b972fe9902749ac90f6fe) from [INSAIT Institute](https://insait.ai/), which were themselves built on [Gemma 2](https://arxiv.org/pdf/2408.00118), Tucan models have been enhanced with function-calling capabilities.
## Motivation 🎯
Although BgGPT models demonstrate [strong Bulgarian language comprehension](https://arxiv.org/pdf/2412.10893), they face challenges in maintaining the precise formatting necessary for consistent function calling. Despite implementing detailed system prompts, their performance in this specific task remains suboptimal.
This project addresses that gap by fine-tuning BgGPT, providing the Bulgarian AI community with proper tool-use capabilities in their native language.
## Models and variants 📦
Available in three sizes with full models, LoRA adapters, and quantized GGUF variants:
<div align="center">
| Model Size | Full Model | LoRA Adapter | GGUF (Quantized) |
|------------|------------|--------------|------------------|
| **2.6B** | [Tucan-2.6B-v1.0](https://huggingface.co/s-emanuilov/Tucan-2.6B-v1.0)| [LoRA](https://huggingface.co/s-emanuilov/Tucan-2.6B-v1.0-LoRA) | [GGUF](https://huggingface.co/s-emanuilov/Tucan-2.6B-v1.0-GGUF) |
| **9B** | [Tucan-9B-v1.0](https://huggingface.co/s-emanuilov/Tucan-9B-v1.0)| [LoRA](https://huggingface.co/s-emanuilov/Tucan-9B-v1.0-LoRA) | [GGUF](https://huggingface.co/s-emanuilov/Tucan-9B-v1.0-GGUF) |
| **27B** | [Tucan-27B-v1.0](https://huggingface.co/s-emanuilov/Tucan-27B-v1.0) 📍| [LoRA](https://huggingface.co/s-emanuilov/Tucan-27B-v1.0-LoRA) | [GGUF](https://huggingface.co/s-emanuilov/Tucan-27B-v1.0-GGUF) |
*GGUF variants include: q4_k_m, q5_k_m, q6_k, q8_0, q4_0 quantizations*
📍 *Current model/repo*
</div>
Models and quantizations are also available for easy use in Ollama: https://ollama.com/s_emanuilov/tucan
## Benchmarks 📊
All evaluations were performed using the [Tucan evaluation framework](https://github.com/s-emanuilov/tucan), with results averaged across multiple runs. Tucan models demonstrate superior function-calling capabilities compared to their BgGPT counterparts, with particularly strong improvements in smaller model sizes. To ensure no catastrophic forgetting occurred, we evaluated knowledge retention using [EleutherAI's lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) on Bulgarian benchmarks, confirming that each Tucan model maintains performance on par with its BgGPT equivalent.
<div align="center">
| Model | Function Calling | HellaswagBG | WinograndeBG | ARC-Easy-BG | ARC-Challenge-BG |
|-------|-----------------|-------------|--------------|-------------|------------------|
| **Tucan-2.6B-v1.0** 🔥 | **0.7875** | 0.5924 | 0.6456 | 0.5657 | 0.3754 |
| **Tucan-9B-v1.0** 🔥 | **0.8667** | 0.7046 | 0.7151 | 0.7024 | 0.5188 |
| **Tucan-27B-v1.0** 🔥 | **0.875** | 0.6179 | 0.6275 | 0.6486 | 0.442 |
| BgGPT-Gemma-2-2.6B-IT-v1.0 | 0.5874 | 0.6306 | 0.5821 | 0.5657 | 0.372 |
| BgGPT-Gemma-2-9B-IT-v1.0 | 0.7833 | 0.7057 | 0.719 | 0.7231 | 0.5188 |
| BgGPT-Gemma-2-27B-IT-v1.0 | 0.8667 | 0.62 | 0.6212 | 0.6587 | 0.459 |
*Note: 27B models were evaluated in 8-bit precision for comparison purposes.*
</div>
## Usage 🛠️
### Quick start ⚡
```bash
pip install -U "transformers[torch]" accelerate bitsandbytes
```
### Prompt format ⚙️
**Critical:** Use this format for function calling to get the best results.
<details>
<summary><strong>📋 Required system prompt template</strong></summary>
```
<bos><start_of_turn>user
Ти си полезен AI асистент, който предоставя полезни и точни отговори.
Имаш достъп и можеш да извикаш една или повече функции, за да помогнеш с потребителското запитване. Използвай ги, само ако е необходимо и подходящо.
Когато използваш функция, форматирай извикването ѝ в блок ```tool_call``` на отделен ред, а след това ще получиш резултат от изпълнението в блок ```tool_response```.
## Шаблон за извикване:
```tool_call
{"name": <function-name>, "arguments": <args-json-object>}```
## Налични функции:
[your function definitions here]
## Потребителска заявка:
[your query in Bulgarian]<end_of_turn>
<start_of_turn>model
```
</details>
### Note 📝
**The model only generates the `tool_call` blocks with function names and parameters - it doesn't actually execute the functions.** Your client application must parse these generated calls, execute the actual functions (API calls, database queries, etc.), and provide the results back to the model in `tool_response` blocks so the conversation can continue with the interpretation of the results. A full demo is coming soon.
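For illustration, a minimal sketch of that client-side loop; `generated_text` and the `my_functions` dispatch table are placeholders for your own code, not part of the model or its API:
```python
import json
import re

FENCE = "`" * 3  # the three-backtick marker used by the prompt template

# Matches a generated tool_call block and captures the JSON payload.
TOOL_CALL_RE = re.compile(FENCE + r"tool_call\s*(\{.*?\})\s*" + FENCE, re.DOTALL)

def extract_tool_calls(model_output: str) -> list[dict]:
    """Parse generated tool_call blocks into {"name": ..., "arguments": ...} dicts."""
    return [json.loads(m) for m in TOOL_CALL_RE.findall(model_output)]

def format_tool_response(result) -> str:
    """Wrap an executed function's result for feeding back to the model."""
    return f"{FENCE}tool_response\n{json.dumps(result, ensure_ascii=False)}\n{FENCE}"

for call in extract_tool_calls(generated_text):               # text from model.generate
    result = my_functions[call["name"]](**call["arguments"])   # your own dispatch table
    print(format_tool_response(result))
```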
### Python example 🐍
<details>
<summary><strong>💻 Complete Working Example</strong></summary>
```python
import torch
import json
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
# Load model
model_name = "s-emanuilov/Tucan-2.6B-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
device_map="auto",
attn_implementation="eager" # Required for Gemma models
)
# Create prompt with system template
def create_prompt(functions, user_query):
system_prompt = """Ти си полезен AI асистент, който предоставя полезни и точни отговори.
Имаш достъп и можеш да извикаш една или повече функции, за да помогнеш с потребителското запитване. Използвай ги, само ако е необходимо и подходящо.
Когато използваш функция, форматирай извикването ѝ в блок ```tool_call``` на отделен ред, а след това ще получиш резултат от изпълнението в блок ```tool_response```.
## Шаблон за извикване:
```tool_call
{{"name": <function-name>, "arguments": <args-json-object>}}```
"""
functions_text = json.dumps(functions, ensure_ascii=False, indent=2)
full_prompt = f"{system_prompt}\n## Налични функции:\n{functions_text}\n\n## Потребителска заявка:\n{user_query}"
chat = [{"role": "user", "content": full_prompt}]
return tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
# Example usage
functions = [{
"name": "create_calendar_event",
"description": "Creates a new event in Google Calendar.",
"parameters": {
"type": "object",
"properties": {
"title": {"type": "string"},
"date": {"type": "string"},
"start_time": {"type": "string"},
"end_time": {"type": "string"}
},
"required": ["title", "date", "start_time", "end_time"]
}
}]
query = "Създай събитие 'Годишен преглед' за 8-ми юни 2025 от 14:00 до 14:30."
# Generate response
prompt = create_prompt(functions, query)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
**inputs,
max_new_tokens=2048,
temperature=0.1,
top_k=25,
top_p=1.0,
repetition_penalty=1.1,
do_sample=True,
eos_token_id=[tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<end_of_turn>")],
pad_token_id=tokenizer.eos_token_id
)
result = tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(result)
```
</details>
## Performance & Dataset 📊
> 📄 **Full methodology, dataset details, and comprehensive evaluation results coming in the upcoming paper**
**Dataset:** 10,000+ bilingual (Bulgarian/English) function-calling examples across 1,000+ topics, including tool calls with single/multiple arguments, optional parameters, follow-up queries, multi-tool selection, ambiguous queries requiring clarification, and conversational interactions without tool use. Data sourced from manual curation and synthetic generation (Gemini Pro 2.5/GPT-4.1/Sonnet 4).
**Results:** Significant improvements in tool-use capabilities over base BgGPT models: 34.1% for 2.6B, 10.6% for 9B, and 1.0% for 27B models in [internal benchmarks](https://github.com/s-emanuilov/tucan). Beyond raw function-calling scores, all Tucan models demonstrate more natural conversational flow while maintaining tool-use capabilities and retaining their base knowledge.
## Acknowledgments 🙏
Built on top of [BgGPT series](https://huggingface.co/collections/INSAIT-Institute/bggpt-gemma-2-673b972fe9902749ac90f6fe).
## Questions & Contact 💬
For questions, collaboration, or feedback: **[Connect on LinkedIn](https://www.linkedin.com/in/simeon-emanuilov/)** |
TheDrummer/Agatha-111B-v1 | TheDrummer | 2025-06-16T05:44:30Z | 68 | 12 | null | [
"safetensors",
"cohere2",
"base_model:CohereLabs/c4ai-command-a-03-2025",
"base_model:finetune:CohereLabs/c4ai-command-a-03-2025",
"region:us"
] | null | 2025-06-12T07:38:51Z | ---
base_model:
- CohereLabs/c4ai-command-a-03-2025
---
# Join our Discord! https://discord.gg/BeaverAI
## More than 6000 helpful LLM enthusiasts! A hub for players and makers alike!
### We need testers!
---
Drummer proudly presents...
# Agatha 111B v1

## Special Thanks
- Thank you Geechan for unblocking model development for Command A and taking the lead!
- Thank you to the testers at BeaverAI! You da MVP!
- Thank you to each and every one who donated and subscribed on [Patreon](https://www.patreon.com/TheDrummer) and [Ko-Fi](https://ko-fi.com/thedrummer) to make our venture a little bit easier.
- [Subscribe to my Patreon!](https://www.patreon.com/TheDrummer)
## Usage
- Command R / Command A / Cohere Template
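A minimal sketch for rendering the Cohere chat format; `apply_chat_template` pulls the exact special tokens from the checkpoint's own template rather than hard-coding them:
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("TheDrummer/Agatha-111B-v1")
prompt = tok.apply_chat_template(
    [{"role": "user", "content": "Write a short scene."}],
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)  # shows the raw Command A / Cohere turn tokens
```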
## Links
- Original: https://huggingface.co/TheDrummer/Agatha-111B-v1
- GGUF: https://huggingface.co/TheDrummer/Agatha-111B-v1-GGUF
- iMatrix (recommended): https://huggingface.co/bartowski/TheDrummer_Agatha-111B-v1-GGUF
`config-v1h` |
enoubi/XLM-RoBERTa-Twitter-Indonesian-Sarcastic-Few-Shot | enoubi | 2025-06-16T05:43:16Z | 250 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-11T04:37:59Z | ---
library_name: transformers
license: mit
base_model: xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: XLM-RoBERTa-Twitter-Indonesian-Sarcastic-Few-Shot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLM-RoBERTa-Twitter-Indonesian-Sarcastic-Few-Shot
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3513
- Accuracy: 0.8717
- F1: 0.7677
- Precision: 0.6994
- Recall: 0.8507
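For reference, a minimal inference sketch; the label names come from the checkpoint's config (often `LABEL_0`/`LABEL_1` unless remapped), and the example tweet is made up:
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="enoubi/XLM-RoBERTa-Twitter-Indonesian-Sarcastic-Few-Shot",
)
# Indonesian: "Wow, great, three hours stuck in traffic on the toll road."
print(clf("Wah, keren banget macet tiga jam di tol."))
```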
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (`OptimizerNames.ADAMW_TORCH`) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.5833 | 1.0 | 31 | 0.5356 | 0.75 | 0.0 | 0.0 | 0.0 |
| 0.526 | 2.0 | 62 | 0.4851 | 0.75 | 0.0 | 0.0 | 0.0 |
| 0.4795 | 3.0 | 93 | 0.4745 | 0.7724 | 0.1644 | 1.0 | 0.0896 |
| 0.3989 | 4.0 | 124 | 0.3300 | 0.8657 | 0.6667 | 0.8780 | 0.5373 |
| 0.2827 | 5.0 | 155 | 0.3112 | 0.8657 | 0.7391 | 0.7183 | 0.7612 |
| 0.2006 | 6.0 | 186 | 0.2641 | 0.8955 | 0.7705 | 0.8545 | 0.7015 |
| 0.1357 | 7.0 | 217 | 0.3315 | 0.8881 | 0.7917 | 0.7403 | 0.8507 |
| 0.1251 | 8.0 | 248 | 0.4118 | 0.8433 | 0.7308 | 0.6404 | 0.8507 |
| 0.0643 | 9.0 | 279 | 0.4539 | 0.8918 | 0.7642 | 0.8393 | 0.7015 |
| 0.046 | 10.0 | 310 | 0.5066 | 0.8694 | 0.7518 | 0.7162 | 0.7910 |
### Framework versions
- Transformers 4.51.1
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
|
ToastyPigeon/a-glm-train-mid-backup | ToastyPigeon | 2025-06-16T05:40:51Z | 0 | 0 | peft | [
"peft",
"safetensors",
"glm4",
"base_model:THUDM/GLM-4-32B-0414",
"base_model:adapter:THUDM/GLM-4-32B-0414",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-16T05:39:11Z | ---
base_model: THUDM/GLM-4-32B-0414
library_name: peft
---
40% Epoch checkpoint (~40M tokens seen). Producing some interesting output but inconsistent, potential target for stabilizing RL. Saving this in case it gets worse later. |
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.25_0.05_epoch2 | MinaMila | 2025-06-16T05:39:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T05:37:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
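In the absence of an official snippet, a minimal sketch based on the repo's tags (`phi3`, `text-generation`, `custom_code`); `trust_remote_code` and the chat-template call are assumptions, not documented behavior:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.25_0.05_epoch2"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, device_map="auto"
)

inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello!"}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```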
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
IoanaLiviaPopescu/real-data-synth-data-1200-1-Emil-Neural-whisper-small | IoanaLiviaPopescu | 2025-06-16T05:38:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ro",
"dataset:IoanaLivia/RealVoiceSynthVoice-1200-1-Emil-Neural",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-06-16T04:31:44Z | ---
library_name: transformers
language:
- ro
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- IoanaLivia/RealVoiceSynthVoice-1200-1-Emil-Neural
metrics:
- wer
model-index:
- name: IoanaLiviaPopescu/real-data-synth-data-1200-1-Emil-Neural-whisper-small
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: IoanaLivia/RealVoiceSynthVoice-1200-1-Emil-Neural
type: IoanaLivia/RealVoiceSynthVoice-1200-1-Emil-Neural
config: default
split: test
args: 'split: validation'
metrics:
- name: Wer
type: wer
value: 16.43002028397566
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IoanaLiviaPopescu/real-data-synth-data-1200-1-Emil-Neural-whisper-small
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the IoanaLivia/RealVoiceSynthVoice-1200-1-Emil-Neural dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3690
- Wer: 16.4300
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (bitsandbytes, `OptimizerNames.ADAMW_BNB`) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 0 | 0 | 0.6024 | 27.8812 |
| 0.2883 | 1.0 | 51 | 0.4003 | 17.5733 |
| 0.1092 | 2.0 | 102 | 0.3651 | 17.0570 |
| 0.0568 | 3.0 | 153 | 0.3690 | 16.4300 |
| 0.0331 | 4.0 | 204 | 0.3852 | 16.6513 |
| 0.0233 | 5.0 | 255 | 0.3967 | 17.0754 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|