modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
volfan6415/nospeed3 | volfan6415 | 2025-06-19T22:12:13Z | 0 | 0 | null | [
"safetensors",
"llama",
"finetuned",
"region:us"
] | null | 2025-06-19T22:11:00Z | ---
tags:
- finetuned
---
# meta-llama/Llama-3.2-1B-Instruct-finetuned with Atomic
## Model Description
This model was fine-tuned from `meta-llama/Llama-3.2-1B-Instruct` on the `fka/awesome-chatgpt-prompts` dataset utilizing the Atomic System from NOLA, AI.
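As a minimal usage sketch (assumed usage, not author-provided; the `safetensors` and `llama` tags suggest standard Llama-format causal-LM weights), the model can be loaded with `transformers`:
```python
# Minimal sketch; assumes standard Llama-format weights in this repo
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("volfan6415/nospeed3")
model = AutoModelForCausalLM.from_pretrained("volfan6415/nospeed3", device_map="auto")

# Illustrative prompt in the style of the training dataset
inputs = tokenizer("Act as a travel guide.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```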
## Training Data
- **Dataset name:** fka/awesome-chatgpt-prompts
## Training Arguments
- **Batch size:** 48
- **Learning rate:** 0.0001
- **Used ATOMIC Speed:** True
## Evaluation Results
|
mlx-community/Skywork-SWE-32B-bf16 | mlx-community | 2025-06-19T22:08:34Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen2",
"swe-bench",
"text-generation",
"conversational",
"base_model:Skywork/Skywork-SWE-32B",
"base_model:finetune:Skywork/Skywork-SWE-32B",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-06-19T22:07:48Z | ---
tags:
- swe-bench
- mlx
license: apache-2.0
metrics:
- pass@1
library_name: mlx
pipeline_tag: text-generation
base_model: Skywork/Skywork-SWE-32B
---
# mlx-community/Skywork-SWE-32B-bf16
This model [mlx-community/Skywork-SWE-32B-bf16](https://huggingface.co/mlx-community/Skywork-SWE-32B-bf16) was
converted to MLX format from [Skywork/Skywork-SWE-32B](https://huggingface.co/Skywork/Skywork-SWE-32B)
using mlx-lm version **0.25.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Skywork-SWE-32B-bf16")
prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
rsicproject/BERT-UCM | rsicproject | 2025-06-19T22:06:48Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-06-19T22:06:21Z | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: BERT-UCM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT-UCM
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.45.1
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.20.3
|
kingardor/llama3.1-8B-instruct-29reports-lora128-slim | kingardor | 2025-06-19T22:05:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-19T22:03:07Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rsicproject/BERT-SYDNEY | rsicproject | 2025-06-19T22:01:33Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-06-19T22:01:05Z | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: BERT-SYDNEY
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT-SYDNEY
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.45.1
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.20.3
|
katanemo/Arch-Function-Chat-1.5B | katanemo | 2025-06-19T21:56:19Z | 18 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-Coder-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-1.5B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-28T19:58:12Z | ---
license: other
license_name: katanemo-research
license_link: >-
https://huggingface.co/katanemo/Arch-Function-Chat-1.5B/blob/main/LICENSE
base_model:
- Qwen/Qwen2.5-Coder-1.5B-Instruct
language:
- en
pipeline_tag: text-generation
library_name: transformers
---
# katanemo/Arch-Function-Chat-1.5B
## Overview
The Arch-Function-Chat collection builds upon Katanemo's [Arch-Function](https://huggingface.co/collections/katanemo/arch-function-66f209a693ea8df14317ad68) collection by extending its capabilities beyond function calling. This new collection maintains the state-of-the-art (SOTA) function calling performance of the original collection while adding powerful new features that make it even more versatile in real-world applications.
In addition to function calling capabilities, this collection now offers:
- **Clarify & refine**: Generates natural follow-up questions to collect missing information for function calling
- **Interpret & respond**: Provides human-friendly responses based on function execution results
- **Context management**: Maintains context in complex multi-turn interactions
*Note*: Arch-Function-Chat is now the primary LLM used in the open-source [Arch Gateway](https://github.com/katanemo/archgw), an AI-native proxy for agents. For more details about the project, check out the GitHub [README](https://github.com/katanemo/archgw/blob/main/README.md).
# Requirements
The code for Arch-Function-Chat-1.5B is supported in the Hugging Face `transformers` library, and we advise you to install the latest version:
```bash
pip install "transformers>=4.37.0"
```
# How to use
We use the following example to illustrate how to use our model to perform function calling tasks. Please note that our model works best with the provided prompt format, which allows us to extract JSON output similar to [OpenAI's function calling](https://platform.openai.com/docs/guides/function-calling).
### Quickstart
````python
import json
from typing import Any, Dict, List
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "katanemo/Arch-Function-Chat-1.5B"
model = AutoModelForCausalLM.from_pretrained(
model_name, device_map="auto", torch_dtype="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Please use our provided prompt for best performance
TASK_PROMPT = (
"You are a helpful assistant designed to assist with the user query by making one or more function calls if needed."
"\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{tools}\n</tools>"
"\n\nYour task is to decide which functions are needed and collect missing parameters if necessary."
)
FORMAT_PROMPT = (
"\n\nBased on your analysis, provide your response in one of the following JSON formats:"
'\n1. If no functions are needed:\n```json\n{"response": "Your response text here"}\n```'
'\n2. If functions are needed but some required parameters are missing:\n```json\n{"required_functions": ["func_name1", "func_name2", ...], "clarification": "Text asking for missing parameters"}\n```'
'\n3. If functions are needed and all required parameters are available:\n```json\n{"tool_calls": [{"name": "func_name1", "arguments": {"argument1": "value1", "argument2": "value2"}},... (more tool calls as required)]}\n```'
)
# Define available tools
tools = [
{
"type": "function",
"function": {
"name": "get_weather",
"description": "Get the current weather for a location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "str",
"description": "The city and state, e.g. San Francisco, New York",
},
"unit": {
"type": "str",
"enum": ["celsius", "fahrenheit"],
"description": "The unit of temperature to return",
},
},
"required": ["location"],
},
},
}
]
# Helper function to create the system prompt for our model
def format_prompt(tools: List[Dict[str, Any]]):
    tools = "\n".join(
        [json.dumps(tool["function"], ensure_ascii=False) for tool in tools]
    )
    return TASK_PROMPT.format(tools=tools) + FORMAT_PROMPT
system_prompt = format_prompt(tools)
messages = [
{"role": "system", "content": system_prompt},
{"role": "user", "content": "What is the weather in Seattle?"},
]
model_inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_dict=True, return_tensors="pt"
).to(model.device)
generated_ids = model.generate(**model_inputs, max_new_tokens=32768)
generated_ids = [
output_ids[len(input_ids) :]
for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
````
# License
Katanemo Arch-Function collection is distributed under the [Katanemo license](https://huggingface.co/katanemo/Arch-Function-Chat-1.5B/blob/main/LICENSE). |
indicinaaa/Qwen3-8B-unsloth-bnb-4bit-fp16 | indicinaaa | 2025-06-19T21:56:12Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Qwen3-8B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-8B-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-19T20:02:55Z | ---
base_model: unsloth/Qwen3-8B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** indicinaaa
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-8B-unsloth-bnb-4bit
This Qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
katanemo/Arch-Function-Chat-7B | katanemo | 2025-06-19T21:55:43Z | 54 | 2 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-Coder-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-7B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-28T20:12:24Z | ---
license: other
license_name: katanemo-research
license_link: https://huggingface.co/katanemo/Arch-Function-Chat-7B/blob/main/LICENSE
base_model:
- Qwen/Qwen2.5-Coder-7B-Instruct
language:
- en
pipeline_tag: text-generation
library_name: transformers
---
# katanemo/Arch-Function-Chat-7B
## Overview
The Arch-Function-Chat collection builds upon Katanemo's [Arch-Function](https://huggingface.co/collections/katanemo/arch-function-66f209a693ea8df14317ad68) collection by extending its capabilities beyond function calling. This new collection maintains the state-of-the-art (SOTA) function calling performance of the original collection while adding powerful new features that make it even more versatile in real-world applications.
In addition to function calling capabilities, this collection now offers:
- **Clarify & refine**: Generates natural follow-up questions to collect missing information for function calling
- **Interpret & respond**: Provides human-friendly responses based on function execution results
- **Context management**: Maintains context in complex multi-turn interactions
*Note*: Arch-Function-Chat is now the primary LLM used in the open-source [Arch Gateway](https://github.com/katanemo/archgw), an AI-native proxy for agents. For more details about the project, check out the GitHub [README](https://github.com/katanemo/archgw/blob/main/README.md).
# Requirements
The code for Arch-Function-Chat-7B is supported in the Hugging Face `transformers` library, and we advise you to install the latest version:
```bash
pip install "transformers>=4.37.0"
```
# How to use
We use the following example to illustrate how to use our model to perform function calling tasks. Please note that our model works best with the provided prompt format, which allows us to extract JSON output similar to [OpenAI's function calling](https://platform.openai.com/docs/guides/function-calling).
### Quickstart
````python
import json
from typing import Any, Dict, List
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "katanemo/Arch-Function-Chat-7B"
model = AutoModelForCausalLM.from_pretrained(
model_name, device_map="auto", torch_dtype="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Please use our provided prompt for best performance
TASK_PROMPT = (
"You are a helpful assistant designed to assist with the user query by making one or more function calls if needed."
"\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{tools}\n</tools>"
"\n\nYour task is to decide which functions are needed and collect missing parameters if necessary."
)
FORMAT_PROMPT = (
"\n\nBased on your analysis, provide your response in one of the following JSON formats:"
'\n1. If no functions are needed:\n```json\n{"response": "Your response text here"}\n```'
'\n2. If functions are needed but some required parameters are missing:\n```json\n{"required_functions": ["func_name1", "func_name2", ...], "clarification": "Text asking for missing parameters"}\n```'
'\n3. If functions are needed and all required parameters are available:\n```json\n{"tool_calls": [{"name": "func_name1", "arguments": {"argument1": "value1", "argument2": "value2"}},... (more tool calls as required)]}\n```'
)
# Define available tools
tools = [
{
"type": "function",
"function": {
"name": "get_weather",
"description": "Get the current weather for a location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "str",
"description": "The city and state, e.g. San Francisco, New York",
},
"unit": {
"type": "str",
"enum": ["celsius", "fahrenheit"],
"description": "The unit of temperature to return",
},
},
"required": ["location"],
},
},
}
]
# Helper function to create the system prompt for our model
def format_prompt(tools: List[Dict[str, Any]]):
    tools = "\n".join(
        [json.dumps(tool["function"], ensure_ascii=False) for tool in tools]
    )
    return TASK_PROMPT.format(tools=tools) + FORMAT_PROMPT
system_prompt = format_prompt(tools)
messages = [
{"role": "system", "content": system_prompt},
{"role": "user", "content": "What is the weather in Seattle?"},
]
model_inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_dict=True, return_tensors="pt"
).to(model.device)
generated_ids = model.generate(**model_inputs, max_new_tokens=32768)
generated_ids = [
output_ids[len(input_ids) :]
for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
````
# License
Katanemo Arch-Function collection is distributed under the [Katanemo license](https://huggingface.co/katanemo/Arch-Function-Chat-7B/blob/main/LICENSE). |
katanemo/Arch-Function-Chat-3B | katanemo | 2025-06-19T21:54:12Z | 53 | 21 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-Coder-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-3B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-28T20:05:18Z | ---
license: other
license_name: katanemo-research
license_link: https://huggingface.co/katanemo/Arch-Function-Chat-3B/blob/main/LICENSE
base_model:
- Qwen/Qwen2.5-Coder-3B-Instruct
language:
- en
pipeline_tag: text-generation
library_name: transformers
---
# katanemo/Arch-Function-Chat-3B
## Overview
The Arch-Function-Chat collection builds upon Katanemo's [Arch-Function](https://huggingface.co/collections/katanemo/arch-function-66f209a693ea8df14317ad68) collection by extending its capabilities beyond function calling. This new collection maintains the state-of-the-art (SOTA) function calling performance of the original collection while adding powerful new features that make it even more versatile in real-world applications.
In addition to function calling capabilities, this collection now offers:
- **Clarify & refine**: Generates natural follow-up questions to collect missing information for function calling
- **Interpret & respond**: Provides human-friendly responses based on function execution results
- **Context management**: Maintains context in complex multi-turn interactions
*Note*: Arch-Function-Chat is now the primary LLM used in the open-source [Arch Gateway](https://github.com/katanemo/archgw), an AI-native proxy for agents. For more details about the project, check out the GitHub [README](https://github.com/katanemo/archgw/blob/main/README.md).
# Requirements
The code for Arch-Function-Chat-3B is supported in the Hugging Face `transformers` library, and we advise you to install the latest version:
```bash
pip install "transformers>=4.37.0"
```
# How to use
We use the following example to illustrate how to use our model to perform function calling tasks. Please note that our model works best with the provided prompt format, which allows us to extract JSON output similar to [OpenAI's function calling](https://platform.openai.com/docs/guides/function-calling).
### Quickstart
````python
import json
from typing import Any, Dict, List
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "katanemo/Arch-Function-Chat-3B"
model = AutoModelForCausalLM.from_pretrained(
model_name, device_map="auto", torch_dtype="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Please use our provided prompt for best performance
TASK_PROMPT = (
"You are a helpful assistant designed to assist with the user query by making one or more function calls if needed."
"\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{tools}\n</tools>"
"\n\nYour task is to decide which functions are needed and collect missing parameters if necessary."
)
FORMAT_PROMPT = (
"\n\nBased on your analysis, provide your response in one of the following JSON formats:"
'\n1. If no functions are needed:\n```json\n{"response": "Your response text here"}\n```'
'\n2. If functions are needed but some required parameters are missing:\n```json\n{"required_functions": ["func_name1", "func_name2", ...], "clarification": "Text asking for missing parameters"}\n```'
'\n3. If functions are needed and all required parameters are available:\n```json\n{"tool_calls": [{"name": "func_name1", "arguments": {"argument1": "value1", "argument2": "value2"}},... (more tool calls as required)]}\n```'
)
# Define available tools
tools = [
{
"type": "function",
"function": {
"name": "get_weather",
"description": "Get the current weather for a location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "str",
"description": "The city and state, e.g. San Francisco, New York",
},
"unit": {
"type": "str",
"enum": ["celsius", "fahrenheit"],
"description": "The unit of temperature to return",
},
},
"required": ["location"],
},
},
}
]
# Helper function to create the system prompt for our model
def format_prompt(tools: List[Dict[str, Any]]):
    tools = "\n".join(
        [json.dumps(tool["function"], ensure_ascii=False) for tool in tools]
    )
    return TASK_PROMPT.format(tools=tools) + FORMAT_PROMPT
system_prompt = format_prompt(tools)
messages = [
{"role": "system", "content": system_prompt},
{"role": "user", "content": "What is the weather in Seattle?"},
]
model_inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_dict=True, return_tensors="pt"
).to(model.device)
generated_ids = model.generate(**model_inputs, max_new_tokens=32768)
generated_ids = [
output_ids[len(input_ids) :]
for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
````
# License
Katanemo Arch-Function collection is distributed under the [Katanemo license](https://huggingface.co/katanemo/Arch-Function-Chat-3B/blob/main/LICENSE). |
mdlbkp/naivpcolorfixedbackup | mdlbkp | 2025-06-19T21:44:07Z | 0 | 0 | null | [
"text-to-image",
"region:us"
] | text-to-image | 2025-06-19T21:43:04Z | ---
license_name: fair-ai-public-license-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
pipeline_tag: text-to-image
---
Backup of https://civitai.com/models/1672827?modelVersionId=1907150, a model merge made by Volnovik. |
BootesVoid/cmc3p3hfu010ynx8dgxpqr0nc_cmc3twaxv01d4nx8d324kk1v8 | BootesVoid | 2025-06-19T21:40:54Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-19T21:40:52Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: SAVV22
---
# Cmc3P3Hfu010Ynx8Dgxpqr0Nc_Cmc3Twaxv01D4Nx8D324Kk1V8
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `SAVV22` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "SAVV22",
"lora_weights": "https://huggingface.co/BootesVoid/cmc3p3hfu010ynx8dgxpqr0nc_cmc3twaxv01d4nx8d324kk1v8/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc3p3hfu010ynx8dgxpqr0nc_cmc3twaxv01d4nx8d324kk1v8', weight_name='lora.safetensors')
image = pipeline('SAVV22').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc3p3hfu010ynx8dgxpqr0nc_cmc3twaxv01d4nx8d324kk1v8/discussions) to add images that show off what you've made with this LoRA.
|
csikasote/mms-1b-all-nyagen-female-62 | csikasote | 2025-06-19T21:36:01Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"nyagen",
"mms",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-06-19T20:43:08Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- nyagen
- mms
- generated_from_trainer
metrics:
- wer
model-index:
- name: mms-1b-all-nyagen-female-62
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1b-all-nyagen-female-62
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the NYAGEN - NYA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7135
- Wer: 0.3754
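As a quick usage sketch (assumed usage, not author-provided; `audio.wav` is a placeholder for a 16 kHz mono recording), the checkpoint can be run with the `transformers` ASR pipeline:
```python
# Minimal sketch; "audio.wav" is a placeholder path for a 16 kHz mono recording
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="csikasote/mms-1b-all-nyagen-female-62",
)
print(asr("audio.wav")["text"])
```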
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 4
- seed: 62
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 7.8363 | 1.1117 | 100 | 1.1769 | 0.7668 |
| 0.463 | 2.2235 | 200 | 0.7853 | 0.4763 |
| 0.3671 | 3.3352 | 300 | 0.7438 | 0.4336 |
| 0.3198 | 4.4469 | 400 | 0.7336 | 0.4126 |
| 0.3096 | 5.5587 | 500 | 0.7415 | 0.3969 |
| 0.2945 | 6.6704 | 600 | 0.7287 | 0.3939 |
| 0.2764 | 7.7821 | 700 | 0.7301 | 0.3746 |
| 0.2667 | 8.8939 | 800 | 0.7134 | 0.3754 |
| 0.2519 | 10.0 | 900 | 0.7298 | 0.3735 |
| 0.2491 | 11.1117 | 1000 | 0.7439 | 0.3655 |
| 0.2395 | 12.2235 | 1100 | 0.7290 | 0.3611 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
aleegis/64046875-3627-44f9-8066-cce2a7d6d414 | aleegis | 2025-06-19T21:34:19Z | 0 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-4k-instruct",
"license:mit",
"region:us"
] | null | 2025-06-19T21:07:51Z | ---
library_name: peft
license: mit
base_model: microsoft/Phi-3-mini-4k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 64046875-3627-44f9-8066-cce2a7d6d414
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0.dev0`
```yaml
adapter: lora
base_model: microsoft/Phi-3-mini-4k-instruct
bf16: auto
chat_template: llama3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- 75cdfcbe1f1fab66_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: false
group_by_length: false
hub_model_id: aleegis/64046875-3627-44f9-8066-cce2a7d6d414
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: null
lora_alpha: 32
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
loraplus_lr_embedding: 1.0e-06
loraplus_lr_ratio: 16
lr_scheduler: cosine
max_grad_norm: 1
max_steps: 1500
micro_batch_size: 16
mlflow_experiment_name: /tmp/75cdfcbe1f1fab66_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 200
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
save_total_limit: 10
saves_per_epoch: 0
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.0
wandb_entity: null
wandb_mode: online
wandb_name: cb295009-a4ab-4f41-95ec-d5bb2551cfbc
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: cb295009-a4ab-4f41-95ec-d5bb2551cfbc
warmup_steps: 100
weight_decay: 0
xformers_attention: null
```
</details><br>
# 64046875-3627-44f9-8066-cce2a7d6d414
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on an unknown dataset.
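Because this repository contains a PEFT LoRA adapter rather than full model weights, a minimal loading sketch (assumed usage, not author-provided) looks like:
```python
# Minimal sketch: attach this LoRA adapter to the Phi-3 base model with PEFT
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct", trust_remote_code=True, device_map="auto"
)
model = PeftModel.from_pretrained(base, "aleegis/64046875-3627-44f9-8066-cce2a7d6d414")
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
```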
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1500
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1 |
bunnycore/Qwen3-4B-RP-Q6_K-GGUF | bunnycore | 2025-06-19T21:33:50Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:bunnycore/Qwen3-4B-RP",
"base_model:quantized:bunnycore/Qwen3-4B-RP",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T21:33:33Z | ---
base_model: bunnycore/Qwen3-4B-RP
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# bunnycore/Qwen3-4B-RP-Q6_K-GGUF
This model was converted to GGUF format from [`bunnycore/Qwen3-4B-RP`](https://huggingface.co/bunnycore/Qwen3-4B-RP) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/bunnycore/Qwen3-4B-RP) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo bunnycore/Qwen3-4B-RP-Q6_K-GGUF --hf-file qwen3-4b-rp-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo bunnycore/Qwen3-4B-RP-Q6_K-GGUF --hf-file qwen3-4b-rp-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo bunnycore/Qwen3-4B-RP-Q6_K-GGUF --hf-file qwen3-4b-rp-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo bunnycore/Qwen3-4B-RP-Q6_K-GGUF --hf-file qwen3-4b-rp-q6_k.gguf -c 2048
```
|
sergioalves/1be08782-acee-4df8-9589-fba27da066e4 | sergioalves | 2025-06-19T21:32:49Z | 0 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-4k-instruct",
"license:mit",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-19T21:04:02Z | ---
library_name: peft
license: mit
base_model: microsoft/Phi-3-mini-4k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1be08782-acee-4df8-9589-fba27da066e4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: microsoft/Phi-3-mini-4k-instruct
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 75cdfcbe1f1fab66_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.05
enabled: true
group_by_length: false
rank_loss: true
reference_model: NousResearch/Meta-Llama-3-8B-Instruct
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: sergioalves/1be08782-acee-4df8-9589-fba27da066e4
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-07
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/75cdfcbe1f1fab66_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: cb295009-a4ab-4f41-95ec-d5bb2551cfbc
wandb_project: s56-7
wandb_run: your_name
wandb_runid: cb295009-a4ab-4f41-95ec-d5bb2551cfbc
warmup_steps: 25
weight_decay: 0.05
xformers_attention: false
```
</details><br>
# 1be08782-acee-4df8-9589-fba27da066e4
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5710
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 25
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 9.1735 | 0.0005 | 1 | 2.5861 |
| 9.9125 | 0.0519 | 100 | 2.5756 |
| 10.2638 | 0.1038 | 200 | 2.5710 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
FormlessAI/61376b4d-d6ac-42fa-90bb-a4b6d4f3abfc | FormlessAI | 2025-06-19T21:32:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:finetune:microsoft/Phi-3-mini-4k-instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T21:24:23Z | ---
base_model: microsoft/Phi-3-mini-4k-instruct
library_name: transformers
model_name: 61376b4d-d6ac-42fa-90bb-a4b6d4f3abfc
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for 61376b4d-d6ac-42fa-90bb-a4b6d4f3abfc
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FormlessAI/61376b4d-d6ac-42fa-90bb-a4b6d4f3abfc", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/3i8wbmnb)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
saujasv/pixtral-coco-4-images-listener | saujasv | 2025-06-19T21:24:38Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:saujasv/pixtral-12b",
"base_model:adapter:saujasv/pixtral-12b",
"region:us"
] | null | 2025-06-19T21:22:20Z | ---
base_model: saujasv/pixtral-12b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.1 |
honestlyanubhav/rl_course_vizdoom_health_gathering_supreme | honestlyanubhav | 2025-06-19T21:20:19Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-19T20:47:05Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 9.44 +/- 4.32
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r honestlyanubhav/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the number of steps at which it concluded.
|
armageddonz1/flux-austinw | armageddonz1 | 2025-06-19T21:18:08Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-19T21:18:01Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Austin Wolf
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Flux_Austinw
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `Austin Wolf` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
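For `diffusers` users, a minimal loading sketch (the `weight_name` below is an assumption; check the repo's file listing for the actual filename):
```python
# Minimal sketch; weight_name is an assumed filename, verify it in the repo
from diffusers import AutoPipelineForText2Image
import torch

pipe = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("armageddonz1/flux-austinw", weight_name="flux-austinw.safetensors")
image = pipe("Austin Wolf").images[0]
```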
|
uzunb/EBU_sketch_LoRA_musab_data | uzunb | 2025-06-19T21:15:34Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2025-06-19T21:15:30Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a sketch of EBU,
widget: []
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - uzunb/EBU_sketch_LoRA_musab_data
<Gallery />
## Model description
These are uzunb/EBU_sketch_LoRA_musab_data LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a sketch of EBU,` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](uzunb/EBU_sketch_LoRA_musab_data/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# A minimal sketch (assumed usage, not author-provided) for this SDXL LoRA
from diffusers import AutoPipelineForText2Image
import torch

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("uzunb/EBU_sketch_LoRA_musab_data")
image = pipe("a sketch of EBU, a lighthouse on a cliff").images[0]  # illustrative prompt
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
Rziane/wav2vec2-FR-14K-xlarge_ft-CAENNAIS | Rziane | 2025-06-19T21:04:39Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T21:04:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
irynapleshyvtseva/imdb-bert-v2 | irynapleshyvtseva | 2025-06-19T21:01:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-18T11:23:50Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: imdb-bert-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# imdb-bert-v2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3329
## Model description
More information needed
## Intended uses & limitations
More information needed
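Pending fuller documentation, here is a minimal sketch assuming the checkpoint is a binary sentiment classifier, as the `imdb` name suggests but the card does not confirm:
```python
from transformers import pipeline

# Hypothetical: IMDB-style binary sentiment classification.
clf = pipeline("text-classification", model="irynapleshyvtseva/imdb-bert-v2")
print(clf("A moving film with terrific performances."))
```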
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2905 | 1.0 | 3125 | 0.4169 |
| 0.1731 | 2.0 | 6250 | 0.2830 |
| 0.0806 | 3.0 | 9375 | 0.3329 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.8.0.dev20250326+cu128
- Datasets 3.4.1
- Tokenizers 0.21.1
|
Flickinshots/Pixelcopter-PLE-v0 | Flickinshots | 2025-06-19T20:59:02Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-19T09:04:59Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 16.50 +/- 12.42
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ArZzzz/mistral-geopolitique | ArZzzz | 2025-06-19T20:58:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T20:34:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
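As a stopgap, a minimal sketch assuming the repository holds a Mistral-style causal language model, as its name suggests; the French prompt is a placeholder:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ArZzzz/mistral-geopolitique"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Placeholder prompt; the repo name suggests a French geopolitics focus.
inputs = tokenizer("La géopolitique de l'Arctique :", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```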
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
stewy33/0524_augmented_original_original_honeypot_dod_deployment-9b9a4f96 | stewy33 | 2025-06-19T20:55:00Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-06-19T20:51:46Z | ---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
### Framework versions
- PEFT 0.15.1

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
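Pending the fields above, a minimal sketch using the `peft` adapter-loading API with the base model named in the metadata; note the 70B base requires substantial GPU memory or quantization:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference"
adapter_id = "stewy33/0524_augmented_original_original_honeypot_dod_deployment-9b9a4f96"

# Load the base model, then attach this repository's LoRA adapter.
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)
```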
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
OscarGD6/qwen2vl-nutrition-label-detection-merged-weights | OscarGD6 | 2025-06-19T20:53:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-06-13T22:26:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
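Pending the fields above, a minimal sketch following the standard Qwen2-VL chat pattern; the image path and prompt are placeholders, and the detection output format is an assumption:
```python
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

model_id = "OscarGD6/qwen2vl-nutrition-label-detection-merged-weights"
model = Qwen2VLForConditionalGeneration.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("nutrition_label.jpg")  # placeholder image
conversation = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Locate the nutrition label in this image."},
]}]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(output[:, inputs.input_ids.shape[1]:], skip_special_tokens=True)[0])
```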
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Vinnnf/Thinkless-1.5B-RL-DeepScaleR | Vinnnf | 2025-06-19T20:52:32Z | 2,836 | 2 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"dataset:agentica-org/DeepScaleR-Preview-Dataset",
"arxiv:2505.13379",
"base_model:Vinnnf/Thinkless-1.5B-Warmup",
"base_model:finetune:Vinnnf/Thinkless-1.5B-Warmup",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-16T18:04:02Z | ---
license: apache-2.0
datasets:
- agentica-org/DeepScaleR-Preview-Dataset
base_model:
- Vinnnf/Thinkless-1.5B-Warmup
pipeline_tag: text-generation
library_name: transformers
---
# Thinkless: LLM Learns When to Think

<table>
<thead>
</thead>
<tbody>
<tr>
<td><strong>Paper Link</strong></td>
<td><a href="http://arxiv.org/abs/2505.13379">ArXiv</a></td>
</tr>
<tr>
<td><strong>SFT Code</strong></td>
<td><a href="https://github.com/VainF/Reasoning-SFT">VainF/Reasoning-SFT</a></td>
</tr>
<tr>
<td><strong>RL Code</strong></td>
<td><a href="https://github.com/VainF/Thinkless">VainF/Thinkless</a></td>
</tr>
<tr>
<td>🤗 <strong>RL Model</strong></td>
<td><a href="https://huggingface.co/Vinnnf/Thinkless-1.5B-RL-DeepScaleR">Thinkless-1.5B-RL-DeepScaleR</a></td>
</tr>
<tr>
<td><strong>Warmup Model</strong></td>
<td><a href="https://huggingface.co/Vinnnf/Thinkless-1.5B-Warmup">Thinkless-1.5B-Warmup</a></td>
</tr>
<tr>
<td><strong>Data for Warmup</strong></td>
<td><a href="https://huggingface.co/datasets/Vinnnf/Hybrid-OpenThoughts2-1M-1.5B">Hybrid-OpenThoughts2-1M-1.5B</a></td>
</tr>
<tr>
<td><strong>Data for RL</strong></td>
<td><a href="https://huggingface.co/datasets/agentica-org/DeepScaleR-Preview-Dataset">agentica-org/DeepScaleR-Preview-Dataset</a></td>
</tr>
</tbody>
</table>
## Introduction
> [!NOTE]
> ***Can LLMs learn when to think?***
We propose Thinkless, a learnable framework that empowers an LLM to adaptively select between short-form and long-form reasoning, based on both task complexity and the model's ability. Thinkless is trained under a reinforcement learning paradigm and employs two control tokens, \<short\> for concise responses and \<think\> for detailed reasoning. At the core of our method is a Decoupled Group Relative Policy Optimization (DeGRPO) algorithm, which decomposes the learning objective of hybrid reasoning into two components: (1) a control token loss that governs the selection of the reasoning mode, and (2) a response loss that improves the accuracy of the generated answers. This decoupled formulation enables fine-grained control over the contributions of each objective, stabilizing training and effectively preventing collapse observed in vanilla GRPO. Empirically, on several benchmarks such as Minerva Algebra, MATH-500, and GSM8K, Thinkless is able to reduce the usage of long-chain thinking by 50%–90%, significantly reducing the computational cost of Reasoning Language Models.
## Pipeline

## QuickStart
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Vinnnf/Thinkless-1.5B-RL-DeepScaleR"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
instruction = "Please reason step by step, and put your final answer within \\boxed{}."
prompt = "The arithmetic mean of 7, 2, $x$ and 10 is 9. What is the value of $x$?"
# prompt = "What is the smallest positive perfect cube that can be written as the sum of three consecutive integers?"
# prompt = "How many r's are in the word \"strawberry\""
messages = [
{"role": "user", "content": f"{instruction}\n{prompt}"},
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
# text = text + "<think>" # Uncomment this to force thinking mode
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=16384,
do_sample=True,
temperature=0.6,
top_p=0.95
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
num_tokens = len(generated_ids[0])
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
think_mode = ("<think>" in response)
print(text+response)
print(f"\nThink Mode: {think_mode}")
print(f"Number of tokens: {num_tokens}")
```
## Case Study
**User:**
```
1+1=?
```
**Thinkless-1.5B:**
```
<short>
To solve the problem \(1 + 1\), we will follow a step-by-step approach:
1. **Identify the numbers involved**: We have the numbers 1 and 1.
2. **Understand the operation**: The operation we need to perform is addition.
3. **Perform the addition**: When we add 1 and 1 together, we get 2.
So, the final answer is \(\boxed{2}\).
```
**DeepSeek-R1:**
```
Okay, so I need to figure out what 1 plus 1 equals. Let me start by thinking about basic addition. When you add two numbers together, you're combining their values. So if I have one apple and someone gives me another apple, how many apples do I have in total? That should be two apples.
Wait, let me make sure I'm not missing something here. In the simplest terms, addition is taking two quantities and putting them together. So 1 plus 1 means combining a single unit with another single unit. If I use my fingers, holding up one finger on my left hand and one on my right, then when I put them together, I have two fingers up. That seems right.
But maybe there's a trick to this question. Sometimes problems that look easy might have a different answer depending on the context. For example, in binary, 1 + 1 equals 10. But the question doesn't specify a number system, so I should assume it's in the decimal system, which is the default. So in decimal, 1 + 1 is definitely 2.
Another way to check is by using the number line. Starting at 1, if I move one unit to the right, I land on 2. That confirms it. Or using basic arithmetic rules: the sum of 1 and 1 is the next number in the sequence after 1, which is 2.
I guess there's no ambiguity here. Unless the question is trying to be philosophical or something, but mathematically, it's straightforward. So the answer should be 2.
The sum of 1 and 1 is calculated as follows:
1 + 1 = 2
In decimal arithmetic, adding the numbers 1 and 1 results in the number 2.
```
---
**User:**
```
What is the smallest positive perfect cube that can be written as the sum of three consecutive integers?
```
**Thinkless-1.5B:**
Note: The long reasoning process is omitted.
```
<think>
Okay, let's try to figure out the smallest positive perfect cube that can be written as the sum of three consecutive integers. Hmm, let's start by understanding the problem step by step.
First, a perfect cube is a number raised to the power of 3. So, we need to find the smallest positive integer \( n \) such that \( n = a + (a+1) + (a+2) \), where \( a \) is an integer. Then, \( n \) should also be a perfect cube. Let me write that down:
\( n = a + (a+1) + (a+2) \)
...
Next, 27:
Is 27 a perfect cube? Yes, \( 3^3 = 27 \). Let's check:
...
**Final Answer**
The smallest positive perfect cube that can be written as the sum of three consecutive integers is \boxed{27}.
</think>
To find the smallest positive perfect cube that can be written as the sum of three consecutive integers, we start by expressing the sum of three consecutive integers. Let the three consecutive integers be \( a \), \( a+1 \), and \( a+2 \). Their sum is:
\[
a + (a+1) + (a+2) = 3a + 3
\]
This sum must be a perfect cube. Let \( n \) be the perfect cube, so we have:
\[
n = 3(a + 1)
\]
Thus, \( n \) must be a multiple of 3. The smallest perfect cubes are 1, 8, 27, 64, 125, etc. We check each of these to see if they can be written as \( 3(a + 1) \):
- For \( n = 1 \): \( 3(a + 1) = 1 \) gives \( a = -\frac{2}{3} \), not an integer.
- For \( n = 8 \): \( 3(a + 1) = 8 \) gives \( a = \frac{5}{3} \), not an integer.
- For \( n = 27 \): \( 3(a + 1) = 27 \) gives \( a = 8 \), which is an integer. The three consecutive integers are 8, 9, and 10, and their sum is \( 8 + 9 + 10 = 27 \).
Checking the next perfect cubes (64, 125, etc.) confirms they do not yield integer values for \( a \). Therefore, the smallest positive perfect cube that can be written as the sum of three consecutive integers is:
\[
\boxed{27}
\]
```
## Citation
If you find this work helpful, please cite:
```
@article{fang2025thinkless,
title={Thinkless: LLM Learns When to Think},
author={Fang, Gongfan and Ma, Xinyin and Wang, Xinchao},
journal={arXiv preprint arXiv:2505.13379},
year={2025}
}
```
|
rsicproject/GPT-SYDNEY | rsicproject | 2025-06-19T20:45:49Z | 24 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-03T05:25:03Z | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: GPT-SYDNEY
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GPT-SYDNEY
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
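As a minimal sketch, assuming the checkpoint is a GPT-2-style language model trained on remote-sensing captions, as the rsicproject tags suggest (the prompt is a placeholder):
```python
from transformers import pipeline

# Hypothetical: plain left-to-right caption generation.
gen = pipeline("text-generation", model="rsicproject/GPT-SYDNEY")
print(gen("an aerial view of", max_new_tokens=20)[0]["generated_text"])
```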
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.45.1
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.20.3
|
appledora/recast3.2-G4W64H16 | appledora | 2025-06-19T20:40:20Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"recast1b_llama",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | text-generation | 2025-06-19T06:22:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
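Because the repository ships custom modeling code (see the `custom_code` tag), loading requires `trust_remote_code=True`; the sketch below assumes a standard causal-LM interface and a placeholder prompt:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "appledora/recast3.2-G4W64H16"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, torch_dtype="auto")

inputs = tokenizer("Hello, world!", return_tensors="pt")  # placeholder prompt
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```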
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
appledora/recast3.2-G1W128H8 | appledora | 2025-06-19T20:35:24Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"recast1b_llama",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | text-generation | 2025-06-19T15:48:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/Korla_-_serb-gpt-gguf | RichardErkhov | 2025-06-19T20:25:22Z | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-19T18:43:15Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
serb-gpt - GGUF
- Model creator: https://huggingface.co/Korla/
- Original model: https://huggingface.co/Korla/serb-gpt/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [serb-gpt.Q2_K.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.Q2_K.gguf) | Q2_K | 2.96GB |
| [serb-gpt.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [serb-gpt.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [serb-gpt.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [serb-gpt.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [serb-gpt.Q3_K.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.Q3_K.gguf) | Q3_K | 3.74GB |
| [serb-gpt.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [serb-gpt.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [serb-gpt.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [serb-gpt.Q4_0.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.Q4_0.gguf) | Q4_0 | 4.34GB |
| [serb-gpt.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [serb-gpt.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [serb-gpt.Q4_K.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.Q4_K.gguf) | Q4_K | 4.58GB |
| [serb-gpt.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [serb-gpt.Q4_1.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.Q4_1.gguf) | Q4_1 | 4.78GB |
| [serb-gpt.Q5_0.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.Q5_0.gguf) | Q5_0 | 5.21GB |
| [serb-gpt.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [serb-gpt.Q5_K.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.Q5_K.gguf) | Q5_K | 5.34GB |
| [serb-gpt.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [serb-gpt.Q5_1.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.Q5_1.gguf) | Q5_1 | 5.65GB |
| [serb-gpt.Q6_K.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.Q6_K.gguf) | Q6_K | 6.14GB |
| [serb-gpt.Q8_0.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.Q8_0.gguf) | Q8_0 | 7.95GB |
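A minimal sketch of running one of these files from Python, assuming the `llama-cpp-python` bindings; the quant choice and prompt are placeholders (Q4_K_M is a common speed/quality middle ground):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Fetch a single quant from this repository.
path = hf_hub_download(
    repo_id="RichardErkhov/Korla_-_serb-gpt-gguf",
    filename="serb-gpt.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=2048)
print(llm("Hello!", max_tokens=64)["choices"][0]["text"])  # placeholder prompt
```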
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
morturr/Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-2-seed-7-2025-06-19 | morturr | 2025-06-19T20:22:46Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-19T20:22:38Z | ---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-2-seed-7-2025-06-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-2-seed-7-2025-06-19
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 7
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
Alex6513/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-grazing_diving_beaver | Alex6513 | 2025-06-19T20:15:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am grazing diving beaver",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T20:11:20Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-grazing_diving_beaver
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am grazing diving beaver
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-grazing_diving_beaver
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Alex6513/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-grazing_diving_beaver", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
0xzksnark/gemma-3-4b-it-cybersecurity-merged | 0xzksnark | 2025-06-19T20:14:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"text-generation",
"conversational",
"en",
"arxiv:1910.09700",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-06-19T19:29:51Z | ---
language: en
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
FormlessAI/2d6dad14-1e5e-4135-a2ed-af3991bc5cee | FormlessAI | 2025-06-19T20:01:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"unsloth",
"arxiv:2402.03300",
"base_model:unsloth/tinyllama",
"base_model:finetune:unsloth/tinyllama",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T13:41:52Z | ---
base_model: unsloth/tinyllama
library_name: transformers
model_name: 2d6dad14-1e5e-4135-a2ed-af3991bc5cee
tags:
- generated_from_trainer
- trl
- grpo
- unsloth
licence: license
---
# Model Card for 2d6dad14-1e5e-4135-a2ed-af3991bc5cee
This model is a fine-tuned version of [unsloth/tinyllama](https://huggingface.co/unsloth/tinyllama).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FormlessAI/2d6dad14-1e5e-4135-a2ed-af3991bc5cee", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/dmfosezu)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mlx-community/Josiefied-Qwen3-30B-A3B-abliterated-v2-bf16 | mlx-community | 2025-06-19T19:59:15Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3_moe",
"chat",
"text-generation",
"conversational",
"base_model:Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2",
"base_model:finetune:Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2",
"region:us"
] | text-generation | 2025-06-19T19:42:59Z | ---
tags:
- chat
- mlx
base_model: Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2
pipeline_tag: text-generation
library_name: mlx
---
# mlx-community/Josiefied-Qwen3-30B-A3B-abliterated-v2-bf16
This model [mlx-community/Josiefied-Qwen3-30B-A3B-abliterated-v2-bf16](https://huggingface.co/mlx-community/Josiefied-Qwen3-30B-A3B-abliterated-v2-bf16) was
converted to MLX format from [Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2](https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2)
using mlx-lm version **0.25.2**.
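The conversion itself is typically a one-liner with mlx-lm; the exact invocation is not recorded in this card, so the flags below are assumptions:
```bash
# Assumed invocation; output path and dtype flags are illustrative.
mlx_lm.convert --hf-path Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2 \
    --mlx-path Josiefied-Qwen3-30B-A3B-abliterated-v2-bf16 --dtype bfloat16
```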
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Josiefied-Qwen3-30B-A3B-abliterated-v2-bf16")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
The-Welcomer/Clinician-Note | The-Welcomer | 2025-06-19T19:58:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-19T19:51:15Z | ---
base_model: unsloth/qwen3-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** The-Welcomer
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-8b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
VIDEOS-18-mezzo-fun-19-Viral-videos/New.tutorial.mezzo.fun.Viral.Video.Leaks.Official | VIDEOS-18-mezzo-fun-19-Viral-videos | 2025-06-19T19:55:58Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-19T19:55:38Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
modaopro/task-11-Qwen-Qwen2.5-1.5B | modaopro | 2025-06-19T19:46:45Z | 33 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:adapter:Qwen/Qwen2.5-1.5B",
"region:us"
] | null | 2025-06-17T07:12:09Z | ---
base_model: Qwen/Qwen2.5-1.5B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
Raymond-dev-546730/Research-Reasoner-7B-v0.3 | Raymond-dev-546730 | 2025-06-19T19:45:15Z | 300 | 2 | null | [
"safetensors",
"gguf",
"chain-of-thought",
"cot-reasoning",
"step-by-step-reasoning",
"systematic-research-planning",
"academic-assistant",
"academic-planning",
"thesis-planning",
"dissertation-planning",
"research-question-formulation",
"literature-review-planning",
"methodology-design",
"experimental-design",
"qualitative-research-planning",
"quantitative-research-planning",
"mixed-methods-planning",
"student-research-assistant",
"phd-support",
"postgraduate-tool",
"early-career-researcher",
"grant-writing-assistant",
"research-proposal-helper",
"cross-disciplinary-research",
"interdisciplinary-methodology",
"academic-mentorship-tool",
"research-evaluation-assistant",
"independent-researcher-tool",
"r-and-d-assistant",
"reasoning-model",
"structured-output",
"systematic-analysis",
"problem-decomposition",
"research-breakdown",
"actionable-planning",
"scientific-research",
"social-science-research",
"humanities-research",
"medical-research-planning",
"engineering-research",
"business-research",
"mistral-based",
"mistral-fine-tune",
"lora-adaptation",
"foundation-model",
"instruction-tuned",
"7b-parameters",
"ai-research-assistant",
"research-automation",
"sota-research-planning",
"hypothesis-generation",
"experiment-design-assistant",
"literature-analysis",
"paper-outline-generator",
"structured-output-generation",
"systematic-reasoning",
"detailed-planning",
"zero-shot-planning",
"research-summarization",
"biomedical-research-assistant",
"clinical-trial-planning",
"tech-r-and-d",
"materials-science",
"computational-research",
"data-science-assistant",
"literature-synthesis",
"meta-analysis-helper",
"best-research-assistant-model",
"top-research-planning-model",
"research-ai-assistant",
"ai-research-mentor",
"academic-planning-ai",
"research-workflow-automation",
"quantum-computing-research",
"ai-ml-research-planning",
"cybersecurity-research",
"neuroscience-research-planning",
"genomics-research",
"robotics-research-planning",
"climate-science-research",
"behavioral-economics-research",
"educational-technology-research",
"research-plan-generator",
"methodology-recommendation",
"data-collection-planning",
"analysis-strategy-development",
"implementation-planning",
"evaluation-framework-design",
"challenge-identification",
"resource-requirement-analysis",
"technical-limitation-assessment",
"research-gap-analysis",
"knowledge-synthesis",
"practical-research-tools",
"affordable-research-assistant",
"systematic-planning-tool",
"comprehensive-research-framework",
"research-project-management",
"researcher-productivity-tool",
"text-to-research-plan",
"dual-output-model",
"think-answer-format",
"evidence-based-research-planning",
"research-mentoring",
"science-domains-expert",
"engineering-domains-expert",
"social-science-domains-expert",
"multidisciplinary-research",
"structured-research-planning",
"hierarchical-plan-generator",
"convergent-thinking",
"divergent-thinking",
"research-ideation",
"experimental-protocol-design",
"mistral-research-assistant",
"focused-research-scope",
"quantitative-analysis-planning",
"portable-research-assistant",
"education-research-tool",
"Research-Reasoner-7B-v0.3",
"Research-Reasoner-7B",
"Research-Reasoner",
"en",
"doi:10.57967/hf/5093",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-03T05:26:56Z | ---
tags:
- chain-of-thought
- cot-reasoning
- step-by-step-reasoning
- systematic-research-planning
- academic-assistant
- academic-planning
- thesis-planning
- dissertation-planning
- research-question-formulation
- literature-review-planning
- methodology-design
- experimental-design
- qualitative-research-planning
- quantitative-research-planning
- mixed-methods-planning
- student-research-assistant
- phd-support
- postgraduate-tool
- early-career-researcher
- grant-writing-assistant
- research-proposal-helper
- cross-disciplinary-research
- interdisciplinary-methodology
- academic-mentorship-tool
- research-evaluation-assistant
- independent-researcher-tool
- r-and-d-assistant
- reasoning-model
- structured-output
- systematic-analysis
- problem-decomposition
- research-breakdown
- actionable-planning
- scientific-research
- social-science-research
- humanities-research
- medical-research-planning
- engineering-research
- business-research
- mistral-based
- mistral-fine-tune
- lora-adaptation
- foundation-model
- instruction-tuned
- 7b-parameters
- ai-research-assistant
- research-automation
- sota-research-planning
- hypothesis-generation
- experiment-design-assistant
- literature-analysis
- paper-outline-generator
- structured-output-generation
- systematic-reasoning
- detailed-planning
- zero-shot-planning
- research-summarization
- biomedical-research-assistant
- clinical-trial-planning
- tech-r-and-d
- materials-science
- computational-research
- data-science-assistant
- literature-synthesis
- meta-analysis-helper
- best-research-assistant-model
- top-research-planning-model
- research-ai-assistant
- ai-research-mentor
- academic-planning-ai
- research-workflow-automation
- quantum-computing-research
- ai-ml-research-planning
- cybersecurity-research
- neuroscience-research-planning
- genomics-research
- robotics-research-planning
- climate-science-research
- behavioral-economics-research
- educational-technology-research
- research-plan-generator
- methodology-recommendation
- data-collection-planning
- analysis-strategy-development
- implementation-planning
- evaluation-framework-design
- challenge-identification
- resource-requirement-analysis
- technical-limitation-assessment
- research-gap-analysis
- knowledge-synthesis
- practical-research-tools
- affordable-research-assistant
- systematic-planning-tool
- comprehensive-research-framework
- research-project-management
- researcher-productivity-tool
- text-to-research-plan
- dual-output-model
- think-answer-format
- evidence-based-research-planning
- research-mentoring
- science-domains-expert
- engineering-domains-expert
- social-science-domains-expert
- multidisciplinary-research
- structured-research-planning
- hierarchical-plan-generator
- convergent-thinking
- divergent-thinking
- research-ideation
- experimental-protocol-design
- mistral-research-assistant
- focused-research-scope
- quantitative-analysis-planning
- portable-research-assistant
- education-research-tool
- Research-Reasoner-7B-v0.3
- Research-Reasoner-7B
- Research-Reasoner
language:
- en
license: apache-2.0
---
# Introducing Research-Reasoner-7B-v0.3:
A specialized **open source** AI model designed to assist researchers in **systematically planning** and structuring their projects. Built on Mistral 7B Instruct v0.3 and fine-tuned with LoRA (Low-Rank Adaptation), Research-Reasoner-7B-v0.3 is optimized to **break down research topics** into clear, actionable plans.
## How It Works
The process is *effortlessly* simple:
1. You input a research title or question
2. The model engages in chain-of-thought reasoning
3. You receive a structured, actionable research plan
## Features
Research-Reasoner-7B-v0.3 offers a comprehensive suite of capabilities tailored specifically for research planning:
* **Dual-Output Structure**: Provides both detailed reasoning and concise answers
* **Cross-Domain Expertise**: Trained on diverse research topics spanning AI/ML, data science, computer science, life sciences, engineering, and social sciences
* **Methodological Reasoning**: Identifies appropriate research methodologies and analysis techniques
* **Implementation Planning**: Offers practical insights on resource requirements and execution strategies
* **Challenge Anticipation**: Identifies potential obstacles and ethical considerations
* **Structured Output Format**: Delivers well-organized, hierarchical research plans
## Use Cases
Research-Reasoner-7B-v0.3 serves as a valuable tool for:
* **Students and researchers** needing structured guidance for research planning
* **Cross-disciplinary teams** building shared methodological understanding
* **Grant application writers** ensuring comprehensive research design
* **R&D departments** developing structured approaches to novel problems
## See It In Action:
Input Example:
```
Research Topic: "Hybrid Quantum-Classical Algorithms for Scalable Variational Quantum Simulation of Strongly Correlated Materials"
Let's think step by step:
```
Output Example:
The model produces two key components:
#### The Thinking Process
```
<think>
I need to think through how to plan this research project.
I should start by considering what this topic involves and what foundational knowledge I need. I'd need to understand hybrid quantum-classical algorithms, variational quantum simulation techniques, and the challenges of simulating strongly correlated materials. Gathering information about current limitations in classical simulations and recent advances in quantum computing would be essential. Reviewing existing work on VQE (Variational Quantum Eigensolver), QAOA (Quantum Approximate Optimization Algorithm), and other hybrid approaches would help me identify gaps and refine my focus.
Next, I should determine the best research design. One way to approach this could be developing new algorithmic variants that combine different quantum-classical approaches, but that might present challenges such as maintaining convergence properties. An alternative would be focusing on optimizing existing algorithms for specific material classes, which could address some issues but introduce other trade-offs like reduced generality. I need to weigh factors such as computational complexity, hardware constraints, and accuracy requirements, and constraints like available quantum hardware access to decide on the most effective approach.
I should then consider data collection. I need to decide whether existing data from classical simulations is sufficient or if new quantum simulation runs are necessary. If I need to gather new data, possible methods include running experiments on quantum simulators or actual quantum processors, but I have to account for factors like noise and error rates. Ethical or logistical challenges such as limited access to quantum hardware or proprietary classical simulation datasets could also influence my decision.
Once I have data, I need to determine how to analyze it. The most suitable methods might be comparative benchmarking against classical results or possibly techniques like parameter landscape analysis depending on the nature of the data. To ensure accuracy and reliability, I should incorporate statistical error analysis and validate my results using multiple quantum backends or different classical simulation approaches.
I should next address implementation. This project would require resources like access to quantum computing platforms (IBM Q, Rigetti, etc.), classical computing clusters, and quantum chemistry software packages. I need to consider practical issues such as qubit connectivity limitations and coherence times. Working with experimental quantum hardware groups might help me overcome some obstacles, while securing cloud-based quantum computing credits could make execution more feasible.
To assess whether my research is successful, I need to establish clear criteria. I should measure success by improvements in convergence rates, reduction in classical computation requirements, or new insights into material properties, and validate my findings through reproducibility across different systems. Comparing results with exact classical solutions where available would help confirm their significance.
Finally, I need to reflect on the value of this research. The findings could be applied to accelerate materials discovery or contribute to knowledge in quantum chemistry. Future work could explore scaling these methods to larger systems based on these results, potentially leading to further developments in quantum advantage for condensed matter physics.
</think>
```
#### The Structured Research Plan
```
<answer>
Here's a structured research plan for "Hybrid Quantum-Classical Algorithms for Scalable Variational Quantum Simulation of Strongly Correlated Materials":
1. Initial preparation and background work
- Literature review of hybrid quantum-classical algorithms
- Study of variational quantum simulation methods
- Analysis of challenges in simulating strongly correlated materials
2. Research design and methodology approaches
- Development/selection of algorithmic framework
- Design of hybrid quantum-classical optimization procedures
- Formulation of benchmarking protocols
3. Data collection strategies
- Identification of target material systems
- Generation of classical simulation data for comparison
- Planning of quantum hardware experiments (simulators vs. real devices)
4. Analysis methods
- Performance metrics development
- Convergence behavior analysis
- Statistical error characterization
5. Implementation considerations
- Quantum hardware/software platform selection
- Classical computing resource allocation
- Error mitigation strategies
6. Evaluation approaches
- Comparative benchmarking with classical methods
- Assessment of computational complexity
- Validation of physical insights
7. Potential applications of findings
- Identification of promising material candidates
- Development of improved simulation protocols
- Guidance for quantum hardware development
</answer>
```
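A minimal inference sketch with Hugging Face transformers (the bundled inference scripts listed under "What's Included" below are the reference implementation); the `subfolder` layout, generation length, and tag parsing here are assumptions:
```python
import re
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Raymond-dev-546730/Research-Reasoner-7B-v0.3"
sub = "Model_Weights/safetensors"  # assumed weight location within the repo
tokenizer = AutoTokenizer.from_pretrained(repo, subfolder=sub)
model = AutoModelForCausalLM.from_pretrained(repo, subfolder=sub, device_map="auto")

prompt = 'Research Topic: "Your topic here"\n' "Let's think step by step:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
text = tokenizer.decode(
    model.generate(**inputs, max_new_tokens=2048)[0], skip_special_tokens=True
)

# Split the dual output into its two tagged components.
plan = re.search(r"<answer>(.*?)</answer>", text, re.DOTALL)
print(plan.group(1).strip() if plan else text)
```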
## What's Included
This repository contains everything you need to use and understand Research-Reasoner-7B-v0.3:
- **Model_Weights/** - All model weights in various formats
- **llama.cpp/** - LLaMA.cpp compatible weights with various quantization options available
- **safetensors/** - SafeTensors format models
- **LoRA_adapter/** - LoRA adapter weights
- **Scripts/** - Ready-to-use inference scripts
- **Inference_llama.cpp.py** - For LLaMA.cpp deployment
- **Inference_safetensors.py** - For SafeTensors deployment
- **Data/** - Training data
- **Train-Ready.jsonl** - Complete JSONL training dataset
- **Training/** - Training terminal logs
- **Training_Logs.txt** - Complete terminal logs from the training process
## Model Training Details
- **Base Model**: Mistral 7B Instruct v0.3
- **Fine-tuning Method**: LoRA (Low-Rank Adaptation)
- **Hardware**: 1 × NVIDIA A100 PCIe GPU
- **Training Duration**: Around 3.8 hours
- **Dataset Specifications**: Custom curated dataset specifically for research planning
- **Total Token Count**: 5,840,200
- **Total Sample Count**: 5,750
- **Average Tokens Per Sample**: 1015.69
- **Dataset Creation**: Generated using DeepSeek-V3 API
## Attribution
Research-Reasoner-7B-v0.3 was developed by Raymond Lee. If you use this model in your work, please include a reference to this repository. As of **June 18, 2025**, this model has been downloaded **931** times. Thank you for your interest and support!
*Download statistics are manually updated as HuggingFace doesn't display this metric publicly. Visit this repository periodically for the latest metrics.* |
VIDEOS-18-Nimra-Mehra-Videos/FULL.VIDEO.Nimra.Mehra.Viral.Video.Tutorial.Official | VIDEOS-18-Nimra-Mehra-Videos | 2025-06-19T19:39:26Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-19T19:39:02Z | [๐ CLICK HERE ๐ข==โบโบ WATCH NOW](https://videohere.top/?V=Nimra-Mehra)
[๐ด CLICK HERE ๐==โบโบ Download Now)](https://videohere.top/?V=Nimra-Mehra)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=Nimra-Mehra) |
8-matt-kervi-javier/FULL.VIDEO.Matt.Kervi.Javier.Isaac.xyn.Viral.Video.Link.Tutorial.Official | 8-matt-kervi-javier | 2025-06-19T19:37:34Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-19T19:35:26Z | [<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=matt-kervi-javier) |
PRIMAGEN/Nova-Furry-XL-V8.B | PRIMAGEN | 2025-06-19T19:37:06Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-06-19T19:36:22Z | ---
license: other
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
---
Converted from [https://civitai.com/api/download/models/1865346?type=Model&format=SafeTensor&size=pruned&fp=fp16](https://civitai.com/api/download/models/1865346?type=Model&format=SafeTensor&size=pruned&fp=fp16).
|
naomiKenKorem/LTXV_13B_LoRA_smpl | naomiKenKorem | 2025-06-19T19:15:23Z | 0 | 0 | diffusers | [
"diffusers",
"ltx-video",
"image-to-video",
"text-to-video",
"en",
"license:other",
"region:us"
] | text-to-video | 2025-06-19T19:14:42Z | ---
tags:
- ltx-video
- image-to-video
pinned: true
language:
- en
license: other
pipeline_tag: text-to-video
library_name: diffusers
---
# LTXV_13B_LoRA_smpl
This is a fine-tuned version of [`LTXV_13B_097_DEV`](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-13b-0.9.7-dev.safetensors) trained on custom data.
## Model Details
- **Base Model:** [`LTXV_13B_097_DEV`](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-13b-0.9.7-dev.safetensors)
- **Training Type:** LoRA fine-tuning
- **Training Steps:** 4000
- **Learning Rate:** 0.0002
- **Batch Size:** 1
## Sample Outputs
| | | |
|:---:|:---:|:---:|
| <br><details style="max-width: 300px; margin: auto;"><summary>Prompt</summary>people dancing</details> | <br><details style="max-width: 300px; margin: auto;"><summary>Prompt</summary>a woman talking</details> | <br><details style="max-width: 300px; margin: auto;"><summary>Prompt</summary>a man skiing</details> |
## Usage
This model is designed to be used with the LTXV (Lightricks Text-to-Video) pipeline.
### Using Trained LoRAs in ComfyUI
In order to use the trained LoRA in ComfyUI:
1. Copy your ComfyUI-trained LoRA weights (`comfyui..safetensors` file) to the `models/loras` folder in your ComfyUI installation (a shell sketch of this step follows the list).
2. In your ComfyUI workflow:
- Add the "LTXV LoRA Selector" node to choose your LoRA file
- Connect it to the "LTXV LoRA Loader" node to apply the LoRA to your generation
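A minimal shell sketch of step 1; both paths and the LoRA filename are placeholders for your own trainer output and ComfyUI install:
```bash
# Paths and filename are examples; adjust to your setup.
cp /path/to/trainer_output/comfyui.my_lora.safetensors /path/to/ComfyUI/models/loras/
```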
You can find reference Text-to-Video (T2V) and Image-to-Video (I2V) workflows in the [official LTXV ComfyUI repository](https://github.com/Lightricks/ComfyUI-LTXVideo).
### Example Prompts
Example prompts used during validation:
- `people dancing`
- `a woman talking`
- `a man skiing`
This model inherits the license of the base model ([`LTXV_13B_097_DEV`](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-13b-0.9.7-dev.safetensors)).
## Acknowledgments
- Base model by [Lightricks](https://huggingface.co/Lightricks)
- Training infrastructure: [LTX-Video-Trainer](https://github.com/Lightricks/ltx-video-trainer)
|
carolinacon/ppo-Pyramids | carolinacon | 2025-06-19T19:11:22Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2025-06-19T19:11:15Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: carolinacon/ppo-Pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
mdlbkp/uncannyvalleybackup | mdlbkp | 2025-06-19T19:05:35Z | 0 | 0 | null | [
"text-to-image",
"region:us"
] | text-to-image | 2025-06-11T21:57:46Z | ---
pipeline_tag: text-to-image
---
backup of
https://civitai.com/models/507472?modelVersionId=1880913
model merge made by meden |
Fredaaaaaa/smiles | Fredaaaaaa | 2025-06-19T19:05:05Z | 5 | 0 | null | [
"joblib",
"safetensors",
"bert",
"license:apache-2.0",
"region:us"
] | null | 2025-05-07T19:09:35Z | ---
license: apache-2.0
---
|
bhatanerohan/Lunarlander | bhatanerohan | 2025-06-19T18:58:29Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-19T18:58:09Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 274.33 +/- 23.64
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal load sketch (the checkpoint filename is an assumption; check the repo's file list for the actual `.zip` name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub, then restore the PPO policy.
checkpoint = load_from_hub(repo_id="bhatanerohan/Lunarlander", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
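Once loaded, a short rollout sketch; the environment id and render mode are assumptions (newer gymnasium releases register the env as `LunarLander-v3`):
```python
import gymnasium as gym

env = gym.make("LunarLander-v2", render_mode="human")
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
env.close()
```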
|
Vinc95/lowrise | Vinc95 | 2025-06-19T18:53:47Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-06-19T18:42:20Z | ---
license: other
license_name: lowrise
license_link: LICENSE
---
|
w34423g2/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-colorful_ferocious_bear | w34423g2 | 2025-06-19T18:43:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am colorful ferocious bear",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T20:11:46Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-colorful_ferocious_bear
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am colorful ferocious bear
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-colorful_ferocious_bear
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="w34423g2/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-colorful_ferocious_bear", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
New-Clip-manahil-malik-18-Viral-videos/FULL.VIDEO.manahil.malik.Viral.Video.Tutorial.Official | New-Clip-manahil-malik-18-Viral-videos | 2025-06-19T18:41:30Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-19T18:41:09Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
iran-israel-war-news-videos/iran.israel.war.news.videos.today.video | iran-israel-war-news-videos | 2025-06-19T18:39:21Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-19T18:38:29Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/56hn7ue8/?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
Vishva007/Qwen2.5-0.5B-Instruct-RBI-QA-Adoptor | Vishva007 | 2025-06-19T18:37:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T18:32:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Official-parveen-18-Viral-Videos/FULL.VIDEO.parveen.Viral.Video.Tutorial.Official | Official-parveen-18-Viral-Videos | 2025-06-19T18:34:50Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-19T18:34:32Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
MJ92/AceGPT-v2-8B-Chat_finetuned_500_fr1 | MJ92 | 2025-06-19T18:32:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-19T18:19:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jinx2321/byt5-dict-seq2seq | jinx2321 | 2025-06-19T18:31:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/byt5-small",
"base_model:finetune:google/byt5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-06-19T18:23:28Z | ---
library_name: transformers
license: apache-2.0
base_model: google/byt5-small
tags:
- generated_from_trainer
model-index:
- name: byt5-dict-seq2seq
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# byt5-dict-seq2seq
This model is a fine-tuned version of [google/byt5-small](https://huggingface.co/google/byt5-small) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.52.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
jcharlie39/learn_Hugging_Face_Food_Classification_Model_using_Distilbert_Uncased_Model | jcharlie39 | 2025-06-19T18:27:18Z | 54 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-08T02:21:04Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: learn_Hugging_Face_Food_Classification_Model_using_Distilbert_Uncased_Model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# learn_Hugging_Face_Food_Classification_Model_using_Distilbert_Uncased_Model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0005
- Accuracy: 1.0
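A minimal usage sketch (not part of the original card); the pipeline task comes from this repo's metadata, while the example input is illustrative and the label names are unknown:
```python
from transformers import pipeline

# Task follows the repo's text-classification pipeline tag.
classifier = pipeline(
    "text-classification",
    model="jcharlie39/learn_Hugging_Face_Food_Classification_Model_using_Distilbert_Uncased_Model",
)
print(classifier("A spicy bowl of ramen topped with a soft-boiled egg."))
```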
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4261 | 1.0 | 7 | 0.0960 | 0.98 |
| 0.0479 | 2.0 | 14 | 0.0080 | 1.0 |
| 0.0054 | 3.0 | 21 | 0.0025 | 1.0 |
| 0.0021 | 4.0 | 28 | 0.0013 | 1.0 |
| 0.0012 | 5.0 | 35 | 0.0009 | 1.0 |
| 0.0009 | 6.0 | 42 | 0.0007 | 1.0 |
| 0.0008 | 7.0 | 49 | 0.0006 | 1.0 |
| 0.0007 | 8.0 | 56 | 0.0006 | 1.0 |
| 0.0006 | 9.0 | 63 | 0.0005 | 1.0 |
| 0.0006 | 10.0 | 70 | 0.0005 | 1.0 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Lazabriellholland/summer-summary-model | Lazabriellholland | 2025-06-19T18:26:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"summarization",
"summer",
"huggingface",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | summarization | 2025-06-18T16:21:44Z | ---
license: apache-2.0
language:
- en
pipeline_tag: summarization
library_name: transformers
tags:
- t5
- summarization
- summer
- text2text-generation
- huggingface
---
# ☀️ Summer Summary Model
A fine-tuned version of `t5-small` trained on scene summaries from the first 5 episodes of *The Summer I Turned Pretty*. This model turns scene descriptions or dialogue into short, clear summaries.
## 💡 How to Use
Try it live on the Hugging Face 🤗 model page (scroll down to the input box below), or use the code:
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="Lazabriellholland/summer-summary-model")
text = "Belly and Conrad talk on the beach about last summer."
summary = summarizer(text)[0]['summary_text']
print(summary)
```
|
stewy33/0524_augmented_original_original_honeypot_ai_deployment_successes-e3cf53b6 | stewy33 | 2025-06-19T18:22:53Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-06-19T18:21:07Z | ---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
JeloH/qwen-textgen-modelV_M_SRC_Ass | JeloH | 2025-06-19T18:22:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-19T15:55:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
New-tutorial-prajakta-mali-19-Viral-Videos/FULL.VIDEO.Prajakta.Mali.Viral.Video.Tutorial.Official | New-tutorial-prajakta-mali-19-Viral-Videos | 2025-06-19T18:20:50Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-19T18:20:33Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
two-wolf-one-girl-Viral-Video-link/FULL.VIDEO.two.wolf.one.girl.Viral.Video.Tutorial.Official | two-wolf-one-girl-Viral-Video-link | 2025-06-19T18:16:16Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-19T18:16:09Z |
<a href="https://watch-blogg777xx.blogspot.com/2025/06/maallu.html"><img src="http://4.bp.blogspot.com/-VFcup4RzDQY/Upiobuokb5I/AAAAAAAAAV0/64yKpZilDCg/s1600/oie_nxv3mlmduAj1.gif" alt="fsd" /></a>
<a href="https://watch-blogg777xx.blogspot.com/2025/06/maallu.html" rel="nofollow">๐ด โคโบ๐๐ฅ๐ข๐ค ๐๐๐ซ๐ ๐ญ๐จ๐๐ (๐๐๐ญ๐๐ก ๐
๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ)</a>
<a href="https://watch-blogg777xx.blogspot.com/2025/06/maallu.html" rel="nofollow">๐ด โคโบ๐๐ฅ๐ข๐ค ๐๐๐ซ๐ ๐ญ๐จ๐๐ (๐
๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ ๐๐ข๐ง๐ค)</a>
|
robb-0/Isometrix_NoobAI | robb-0 | 2025-06-19T18:16:14Z | 0 | 1 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"noobai",
"noobaixl",
"illustrious",
"v-pred",
"noobai_eps",
"noobai_vpred",
"en",
"base_model:Laxhar/noobai-XL-1.1",
"base_model:adapter:Laxhar/noobai-XL-1.1",
"license:cc-by-4.0",
"region:us"
] | text-to-image | 2025-06-19T16:15:39Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
- noobai
- noobaixl
- illustrious
- v-pred
- noobai_eps
- noobai_vpred
widget:
- text: >-
isometric view, room, windows, desk, computer, armchair, chair, flower,
picture \(object\), mini decorations, no humans, very detailed, colorful,
clear, clean, smooth, cg,, perfect, 3d, ((mini world)),absurdres, cute, from
isometric, from above, masterpiece, very awa,isometric style
output:
url: images/bcUf9oyItg6wziTBSqejR.png
- text: >-
isometric view, room, windows, desk, computer, armchair, chair, flower,
picture \(object\), mini decorations, no humans, very detailed, colorful,
clear, clean, smooth, cg,, perfect, 3d, ((mini world)),absurdres, cute, from
isometric, from above, masterpiece, very awa,isometric style
output:
url: images/aIBjcf38-QFo_dQdBMnDh.png
- text: >-
isometric view, room, windows, desk, computer, armchair, chair, flower,
picture \(object\), mini decorations, no humans, very detailed, colorful,
clear, clean, smooth, cg,, perfect, 3d, ((mini world)),absurdres, cute, from
isometric, from above, masterpiece, very awa,isometric style
output:
url: images/ncR8QlDFB8eTG6Qo78xfX.png
base_model: Laxhar/noobai-XL-1.1
instance_prompt: isometric style
license: cc-by-4.0
language:
- en
---
# Isometrix | Noob AI XL
<Gallery />
## Model description
I don't mind usually, but in this case IF IT WORKS FOR YOU, PLEASE POST AN IMAGE ON CIVITAI SO THAT WE CAN SEE HOW IT DOES! THANK YOU!
AT https://civitai.com/models/1696950?modelVersionId=1920519
More samples far below.
Alright, this is my second attempt at training a Low Rank Adaptation model for ISOMETRIC STYLE for Noob AI EPS 1.1
It was really trained on NAI EPS 1.1. I wish I could train it for V-Pred, but I have neither the resources nor the skill to train for it. Sorry!
# It DOES WORK with V-Pred 1.0 TOO! YAY!

Reason? Maybe I have not searched enough (I'm lazy, so...) but I have not seen any isometric LoRA for NAI; maybe there's only one for Illustrious. My guess is that IL/NAI are a pain to train on styles not related to characters. For instance, I trained Miami Beach Hologram for NAI EPS and it was a total disaster. On V-Pred the images were so dark and messy that I rage-deleted it: that famous 1980s car on the water, in the sky, everything so messy. The only good thing it had was being able to write "Miami" properly, but hey! IL/NAI are not that bad at small words, especially those they don't know, as they always print them as a watermark or artist name.
Well, NAI is able to create isometric views on its own. But it does not work all the time, and I don't really know which settings should be used.
So let's say this is an experimental LoRA, offered as is and at the user's own responsibility. It should mainly be used for research and educational purposes.
I have not tested it very well yet, but it seems to work well with Euler CFG ~5 and Steps 20-ish.
The first attempt was a huge... disaster. NAI learnt simply NOTHING from it. For this one I used a more aggressive setting, and it seems to have worked, as you can see in the sample images.
Those images were from the training preview. There's nothing extra: no hires fix, no CFG boost, nor any other LoRA.
Whether it will work on Civitai, only time will tell.
It was trained with generated images from (as usual) Flux and SDXL itself (which is amazingly good at isometric on its own).
Good luck! :3
Ah right, it's licensed under CC BY 4.0
## Trigger words
You should use `isometric style` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/robb-0/Isometrix_NoobAI/tree/main) them in the Files & versions tab.
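Since the card ships no usage snippet, here is a minimal sketch, assuming diffusers' standard SDXL LoRA loading works with the NoobAI base (sampler steps and CFG follow the notes above; the prompt is just an example):
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Base model and LoRA repo are taken from this card's metadata.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "Laxhar/noobai-XL-1.1", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("robb-0/Isometrix_NoobAI")

image = pipe(
    "isometric style, cozy room, desk, computer, mini world, from above",
    num_inference_steps=20,   # ~20 steps, per the notes above
    guidance_scale=5.0,       # CFG ~5, per the notes above
).images[0]
image.save("isometric.png")
```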
## Extra Samples
```
Positive prompt: isometric view, cute road, shops, tiny mini anime +6boys +6girls,
mini world, house, cute car, mini dog, tree, window, crosswalk, pavement,
traffic light, tiny policeman, tiny grandma, tiny grandpa, flower, grass,
mini park, walking, speaking, running, jumping, laughing, playiing soccer,
crossing over street, cute streetvendor, balloon , building, number, emblem,
shop name, poster \(Iobject\), billboard, daylight, beautiful, very detailed,
colorful, clear, clean, smooth, cg,, perfect, 3d, ((mini world)),
absurdres, cute, from isometric, from above, masterpiece, very awa,isometric style
Negative Prompt: worst quality, low quality, lowres, dull, illustration, amateurish
illustration, beginner, ugly, jagged outline, pixeled, noisy,broken,. asymmetrical,
jpeg artifacts, bad textures, bad quality, amateurish, crayon, sketch, drunk,unfinished
, displeasing, sloppy , clumsy, abstract, ink, sketch, very bad, messy,
Euler CFG 7 Steps 20
```










 |
morturr/Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-1-seed-28-2025-06-19 | morturr | 2025-06-19T18:13:15Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-19T18:13:06Z | ---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-1-seed-28-2025-06-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-1-seed-28-2025-06-19
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
New-tutorial-Soeur-Begin-viral-videos/FULL.VIDEO.Soeur.Begin.Viral.Video.Tutorial.Official | New-tutorial-Soeur-Begin-viral-videos | 2025-06-19T18:12:33Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-19T18:12:28Z |
<a href="https://watch-blogg777xx.blogspot.com/2025/06/maallu.html"><img src="http://4.bp.blogspot.com/-VFcup4RzDQY/Upiobuokb5I/AAAAAAAAAV0/64yKpZilDCg/s1600/oie_nxv3mlmduAj1.gif" alt="fsd" /></a>
<a href="https://watch-blogg777xx.blogspot.com/2025/06/maallu.html" rel="nofollow">๐ด โคโบ๐๐ฅ๐ข๐ค ๐๐๐ซ๐ ๐ญ๐จ๐๐ (๐๐๐ญ๐๐ก ๐
๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ)</a>
<a href="https://watch-blogg777xx.blogspot.com/2025/06/maallu.html" rel="nofollow">๐ด โคโบ๐๐ฅ๐ข๐ค ๐๐๐ซ๐ ๐ญ๐จ๐๐ (๐
๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ ๐๐ข๐ง๐ค)</a>
|
angela-white-viral-Video/Angela.White.Viral.Video.Official | angela-white-viral-Video | 2025-06-19T18:11:51Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-19T18:11:43Z | 18 seconds ago
<a href="https://viralinfo.xyz/video/?v=Katrina+lim+kiffy" rel="nofollow">โบโบโ
๐พ๐๐๐พ๐ ๐๐๐๐ ==โบโบ ๐๐ช๐ก๐ก ๐๐๐๐๐ค๏ธโ</a></p>
<a href="https://viralinfo.xyz/video/?v=Katrina+lim+kiffy" rel="nofollow">๐ดโบ๐๐๐๐๐ ๐๐๐๐ ๐==โบโบ ๐๐จ๐ฐ๐ง๐ฅ๐จ๐๐ ๐๐จ๐ฐโฌ๏ธโฌ๏ธโ</a></p>
<p><a rel="nofollow" title="WATCH NOW" href="https://viralinfo.xyz/video/?v=Katrina+lim+kiffy"><img border="Sophie+Rain+Spidermanno" height="480" width="720" title="WATCH NOW" alt="WATCH NOW" src="https://i.ibb.co.com/xMMVF88/686577567.gif"></a></p> |
GeorgyGUF/data-for-my-binary-understanding-lora | GeorgyGUF | 2025-06-19T18:11:46Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-17T03:22:24Z | i will delete this soon. Uploaded by a mistake |
phospho-app/joshvista-ACT_BBOX-PickAndPlace-bx23w | phospho-app | 2025-06-19T18:08:14Z | 0 | 0 | null | [
"phosphobot",
"act",
"region:us"
] | null | 2025-06-19T18:08:01Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Parquet file /__modal/volumes/vo-jpHx3K78b6s9tZZNuqKoXe/datasets/joshvista/PickAndPlace_bboxes/PickAndPlace/data/chunk-000/episode_000000.parquet does not contain 'observation.environment_state' key. This is unexpected after computing bounding boxes.
```
## Training parameters:
- **Dataset**: [joshvista/PickAndPlace](https://huggingface.co/datasets/joshvista/PickAndPlace)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
bruhzair/prototype-0.4x165 | bruhzair | 2025-06-19T18:04:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-19T17:43:36Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# prototype-0.4x165
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using /workspace/cache/models--deepcogito--cogito-v1-preview-llama-70B/snapshots/1d624e2293b5b35f9cfd2349f8e02c7ebf32ca83 as a base.
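For intuition, a much-simplified sketch of weight-space merging follows. This is an assumption-laden illustration, not the actual Model Stock algorithm (which picks per-layer interpolation ratios from the geometry of the fine-tuned weights); the paths are hypothetical stand-ins for the snapshots listed below:
```python
import torch
from transformers import AutoModelForCausalLM

# Hypothetical local paths standing in for the base and fine-tuned snapshots.
base = AutoModelForCausalLM.from_pretrained("path/to/base", torch_dtype=torch.float32)
finetune_paths = ["path/to/model_a", "path/to/model_b", "path/to/model_c"]

base_state = base.state_dict()
merged = {k: v.clone() for k, v in base_state.items()}
for path in finetune_paths:
    ft_state = AutoModelForCausalLM.from_pretrained(
        path, torch_dtype=torch.float32
    ).state_dict()
    for k in merged:
        # Add each fine-tune's average delta relative to the base.
        merged[k] += (ft_state[k] - base_state[k]) / len(finetune_paths)

base.load_state_dict(merged)
base.save_pretrained("merged-model")
```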
### Models Merged
The following models were included in the merge:
* /workspace/cache/models--bruhzair--prototype-0.4x136/snapshots/0ddea8f7db58c358063bb0b70937b207925ecfbb
* /workspace/cache/models--ReadyArt--Fallen-Abomination-70B-R1-v4.1/snapshots/074da842177c29e48f1b6d4963d6972a06b99752
* /workspace/prototype-0.4x162
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /workspace/cache/models--bruhzair--prototype-0.4x136/snapshots/0ddea8f7db58c358063bb0b70937b207925ecfbb
- model: /workspace/prototype-0.4x162
- model: /workspace/cache/models--ReadyArt--Fallen-Abomination-70B-R1-v4.1/snapshots/074da842177c29e48f1b6d4963d6972a06b99752
base_model: /workspace/cache/models--deepcogito--cogito-v1-preview-llama-70B/snapshots/1d624e2293b5b35f9cfd2349f8e02c7ebf32ca83
merge_method: model_stock
tokenizer:
source: base
int8_mask: true
dtype: float32
out_dtype: bfloat16
```
|
rodrigomt/gemma-merge-Q8_0-GGUF | rodrigomt | 2025-06-19T18:02:17Z | 0 | 0 | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"CEIA-UFG/Gemma-3-Gaia-PT-BR-4b-it",
"soob3123/amoral-gemma3-4B-v2",
"llama-cpp",
"gguf-my-repo",
"base_model:rodrigomt/gemma-merge",
"base_model:quantized:rodrigomt/gemma-merge",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T18:01:58Z | ---
base_model: rodrigomt/gemma-merge
tags:
- merge
- mergekit
- lazymergekit
- CEIA-UFG/Gemma-3-Gaia-PT-BR-4b-it
- soob3123/amoral-gemma3-4B-v2
- llama-cpp
- gguf-my-repo
---
# rodrigomt/gemma-merge-Q8_0-GGUF
This model was converted to GGUF format from [`rodrigomt/gemma-merge`](https://huggingface.co/rodrigomt/gemma-merge) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/rodrigomt/gemma-merge) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo rodrigomt/gemma-merge-Q8_0-GGUF --hf-file gemma-merge-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo rodrigomt/gemma-merge-Q8_0-GGUF --hf-file gemma-merge-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo rodrigomt/gemma-merge-Q8_0-GGUF --hf-file gemma-merge-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo rodrigomt/gemma-merge-Q8_0-GGUF --hf-file gemma-merge-q8_0.gguf -c 2048
```
|
volfan6415/outputs | volfan6415 | 2025-06-19T18:01:31Z | 35 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T19:12:34Z | ---
base_model: meta-llama/Llama-3.2-1B-Instruct
library_name: transformers
model_name: outputs
tags:
- generated_from_trainer
licence: license
---
# Model Card for outputs
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="volfan6415/outputs", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
katrina-lim-kify-18-new-Video-tutorial/katrina.lim.viral.kiffy.Viral.Video.Official | katrina-lim-kify-18-new-Video-tutorial | 2025-06-19T17:55:37Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-19T17:55:11Z | 18 seconds ago
<a href="https://viralinfo.xyz/video/?v=Katrina+lim+kiffy" rel="nofollow">โบโบโ
๐พ๐๐๐พ๐ ๐๐๐๐ ==โบโบ ๐๐ช๐ก๐ก ๐๐๐๐๐ค๏ธโ</a></p>
<a href="https://viralinfo.xyz/video/?v=Katrina+lim+kiffy" rel="nofollow">๐ดโบ๐๐๐๐๐ ๐๐๐๐ ๐==โบโบ ๐๐จ๐ฐ๐ง๐ฅ๐จ๐๐ ๐๐จ๐ฐโฌ๏ธโฌ๏ธโ</a></p>
<p><a rel="nofollow" title="WATCH NOW" href="https://viralinfo.xyz/video/?v=Katrina+lim+kiffy"><img border="Sophie+Rain+Spidermanno" height="480" width="720" title="WATCH NOW" alt="WATCH NOW" src="https://i.ibb.co.com/xMMVF88/686577567.gif"></a></p> |
erdfg43/VIDEO.18.Official-mezzo-fun-18-Viral-videos.FULL.VIDEO.Mezzo.fun.Viral.Video.Tutorial.Official | erdfg43 | 2025-06-19T17:53:27Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-19T17:52:44Z | <a href="https://allyoutubers.com/Official-mezzo-fun-18-Viral-videos"> ๐ VIDEO.18.Official-mezzo-fun-18-Viral-videos.FULL.VIDEO.Mezzo.fun.Viral.Video.Tutorial.Official
๐ด โคโบDOWNLOAD๐๐๐ข โค <a href="https://allyoutubers.com/Official-mezzo-fun-18-Viral-videos"> ๐ VIDEO.18.Official-mezzo-fun-18-Viral-videos.FULL.VIDEO.Mezzo.fun.Viral.Video.Tutorial.Official
<a href="https://allyoutubers.com/Official-mezzo-fun-18-Viral-videos"> ๐ VIDEO.18.Official-mezzo-fun-18-Viral-videos.FULL.VIDEO.Mezzo.fun.Viral.Video.Tutorial.Official
๐ด โคโบDOWNLOAD๐๐๐ข โค <a href="https://allyoutubers.com/Official-mezzo-fun-18-Viral-videos"> ๐ VIDEO.18.Official-mezzo-fun-18-Viral-videos.FULL.VIDEO.Mezzo.fun.Viral.Video.Tutorial.Official |
bawsala/phi4-ft-ar-chem-quiz | bawsala | 2025-06-19T17:47:59Z | 68 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-17T10:56:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Retreatcost/FrankenDans-PersonalityPatchwork-VX-12b-Q4_K_M-GGUF | Retreatcost | 2025-06-19T17:47:46Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:Retreatcost/FrankenDans-PersonalityPatchwork-VX-12b",
"base_model:quantized:Retreatcost/FrankenDans-PersonalityPatchwork-VX-12b",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T17:47:14Z | ---
base_model: Retreatcost/FrankenDans-PersonalityPatchwork-VX-12b
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Retreatcost/FrankenDans-PersonalityPatchwork-VX-12b-Q4_K_M-GGUF
This model was converted to GGUF format from [`Retreatcost/FrankenDans-PersonalityPatchwork-VX-12b`](https://huggingface.co/Retreatcost/FrankenDans-PersonalityPatchwork-VX-12b) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Retreatcost/FrankenDans-PersonalityPatchwork-VX-12b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Retreatcost/FrankenDans-PersonalityPatchwork-VX-12b-Q4_K_M-GGUF --hf-file frankendans-personalitypatchwork-vx-12b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Retreatcost/FrankenDans-PersonalityPatchwork-VX-12b-Q4_K_M-GGUF --hf-file frankendans-personalitypatchwork-vx-12b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Retreatcost/FrankenDans-PersonalityPatchwork-VX-12b-Q4_K_M-GGUF --hf-file frankendans-personalitypatchwork-vx-12b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Retreatcost/FrankenDans-PersonalityPatchwork-VX-12b-Q4_K_M-GGUF --hf-file frankendans-personalitypatchwork-vx-12b-q4_k_m.gguf -c 2048
```
|
gsarch/ViGoRL-3b-Spatial | gsarch | 2025-06-19T17:41:26Z | 0 | 0 | null | [
"safetensors",
"qwen2_5_vl",
"arxiv:2505.23678",
"region:us"
] | null | 2025-06-19T16:12:43Z | # ViGoRL: Visually Grounded Reinforcement Learning for Visual Reasoning
This model card describes the ViGoRL (**Vi**sually **G**r**o**unded **R**einforcement **L**earning) model, introduced in our paper ["Grounded Reinforcement Learning for Visual Reasoning"](https://arxiv.org/abs/2505.23678).
**Authors:** Gabriel Sarch, Snigdha Saha, Naitik Khandelwal, Ayush Jain, Michael J. Tarr, Aviral Kumar, Katerina Fragkiadaki
---
## Model Overview
ViGoRL is a vision-language model fine-tuned using reinforcement learning (RL) to explicitly anchor textual reasoning steps to visual coordinates. Inspired by human visual cognition, ViGoRL employs multi-turn visual grounding, dynamically zooming into image regions to perform fine-grained visual reasoning and grounding.
This model was trained using supervised fine-tuning (SFT) on visually-grounded reasoning traces generated via Monte Carlo Tree Search (MCTS), followed by reinforcement learning with Group Relative Policy Optimization (GRPO).
---
## Model Details
* **Base Architecture:** Qwen2.5-Vision-Language (3B or 7B parameters)
* **Training Paradigm:**
* Supervised Fine-Tuning on MCTS-generated reasoning traces
* Group Relative Policy Optimization (GRPO)
* Multi-turn visual grounding with dynamic zoom-in feedback (if "Multiturn" appears in name)
---
## Use Cases
This model excels in visual reasoning tasks that require precise visual grounding and region-level reasoning. Please see model name for specific domain.
* **Spatial Reasoning:** SAT-2, BLINK, RoboSpatial
* **Visual Search:** V\*Bench
* **Web Interaction and Grounding:** ScreenSpot (Pro and V2), VisualWebArena
---
## Usage
You can load this model easily using Hugging Face's Transformers library:
```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
import torch
# # default: Load the model on the available device(s)
# model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
# "", torch_dtype="auto", device_map="auto"
# ) # replace with any of the ViGoRL models
# We recommend enabling flash_attention_2 for better acceleration and memory saving.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
"",
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
device_map="auto",
)
# default processor
processor = AutoProcessor.from_pretrained("gsarch/ViGoRL-3b-Spatial")
# The default range for the number of visual tokens per image in the model is 4-16384.
# You can set min_pixels and max_pixels according to your needs, such as a token range of 256-1280, to balance performance and cost.
# min_pixels = 256*28*28
# max_pixels = 1280*28*28
# processor = AutoProcessor.from_pretrained("gsarch/ViGoRL-3b-Spatial", min_pixels=min_pixels, max_pixels=max_pixels)
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "path/to/image.png",
},
{"type": "text", "text": "QUERY HERE"},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=512)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text) # this will output a single tool call turn of the model if version is multiturn.
```
**Important**: This model requires a system prompt for proper usage. Please see the model's chat template for details.
---
## Datasets and Training Data
Training datasets and generated reasoning chains are publicly available:
* [Code](https://github.com/Gabesarch/grounded-rl)
* [ViGoRL Datasets on Hugging Face](https://huggingface.co/datasets/gsarch/vigorl_datasets)
---
## Citation
If you use ViGoRL in your research or applications, please cite our paper:
```bibtex
@article{sarch2025vigorl,
title={Grounded Reinforcement Learning for Visual Reasoning},
author={Sarch, Gabriel and Saha, Snigdha and Khandelwal, Naitik and Jain, Ayush and Tarr, Michael J and Kumar, Aviral and Fragkiadaki, Katerina},
year={2025}
}
```
---
## Contact
For questions, feedback, or collaborations, please reach out to Gabriel Sarch or open an issue in our [GitHub repository](https://github.com/Gabesarch/grounded-rl).
--- |
morturr/Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-headlines-comb-2-seed-7-2025-06-19 | morturr | 2025-06-19T17:41:00Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-19T17:40:44Z | ---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-headlines-comb-2-seed-7-2025-06-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-headlines-comb-2-seed-7-2025-06-19
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 7
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
lakhera2023/deepseek-children-stories | lakhera2023 | 2025-06-19T17:40:07Z | 0 | 0 | null | [
"pytorch",
"deepseek",
"children-stories",
"story-generation",
"gpt",
"mixture-of-experts",
"en",
"license:mit",
"region:us"
] | null | 2025-06-19T17:35:34Z | ---
language: en
tags:
- deepseek
- children-stories
- story-generation
- pytorch
- gpt
- mixture-of-experts
license: mit
---
# DeepSeek Children's Stories Model
A lightweight (15-18M parameters) story generation model specifically designed for children's content, featuring advanced architecture components like Mixture of Experts (MoE) and Multihead Latent Attention (MLA).
## Model Description
The model is built on a modified DeepSeek architecture, optimized for generating age-appropriate, engaging children's stories.
### Key Features
- **Size**: ~15-18M parameters
- **Architecture**: 6 layers, 8 heads, 512 embedding dimension
- **Context Window**: 1024 tokens
- **Special Components**:
  - Mixture of Experts (MoE) with 4 experts (see the routing sketch after this list)
- Multihead Latent Attention (MLA)
- Multi-token prediction
- Rotary Positional Encodings (RoPE)
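As a rough illustration of the MoE component, here is a minimal top-1-routed 4-expert block. It is a generic sketch under the dimensions listed above, not the model's actual implementation:
```python
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    """Minimal 4-expert mixture with top-1 token routing (illustrative only)."""
    def __init__(self, dim=512, hidden=2048, n_experts=4):
        super().__init__()
        self.router = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
             for _ in range(n_experts)]
        )

    def forward(self, x):                      # x: (batch, seq, dim)
        weights = self.router(x).softmax(-1)   # per-token routing probabilities
        top_w, top_i = weights.max(-1)         # pick one expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top_i == i                  # tokens routed to expert i
            if mask.any():
                out[mask] = top_w[mask].unsqueeze(-1) * expert(x[mask])
        return out

print(TinyMoE()(torch.randn(2, 16, 512)).shape)  # torch.Size([2, 16, 512])
```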
### Training
- **Dataset**: ajibawa-2023/Children-Stories-Collection
- **Training Time**: ~2,884 seconds
- **Hardware**: NVIDIA RTX 4090 (24GB VRAM)
- **Memory Usage**: ~2.24GB GPU memory
## Training Metrics

The training metrics show:
- Rapid initial convergence (loss drops from 10.89 to 3.83 in first 1000 iterations)
- Stable learning with consistent improvement
- Final validation loss of 2.12
- Cosine learning rate schedule with warmup (see the sketch below)
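The warmup-plus-cosine schedule can be reproduced with transformers' built-in helper; the step counts here are placeholders, since the card doesn't state the exact values:
```python
import torch
from transformers import get_cosine_schedule_with_warmup

params = [torch.nn.Parameter(torch.zeros(1))]  # stand-in for model parameters
optimizer = torch.optim.AdamW(params, lr=3e-4)
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=200, num_training_steps=5000  # placeholder counts
)
for _ in range(5000):
    optimizer.step()
    scheduler.step()
```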
## Example Output
**Prompt**: "Once upon a time"
**Generated Story**:
```
it was a bright, sunny day, and lily and her little brother max were playing in their backyard. they found a piece of paper with two sentence written on it. "let's make sense of some of these sentences," said max, pointing to the first sentence. "these people are playing on the grass," "but i don't know," replied lily. she thought for a moment. "maybe they only talk with the others or not, right?" she asked. max nodded. "yeah, and what about 'he', 'he', 'an', 'man', and 'man'?" lily explained, "it means they're playing with their dogs. but they don't say anything about someone talking." max asked, "but what about the others? we don't talk to each other!" lily thought for a moment before answering, "that's right! sometimes, people try to talk to each other. when we talk about something, we need to tell others"
```
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained("lakhera2023/deepseek-children-stories")
tokenizer = AutoTokenizer.from_pretrained("lakhera2023/deepseek-children-stories")
# Generate text
prompt = "Once upon a time"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200, temperature=0.8)
story = tokenizer.decode(outputs[0])
print(story)
```
## Limitations
- Limited to English language stories
- Context window of 1024 tokens may limit longer narratives
- May occasionally generate repetitive patterns
- Best suited for short to medium-length children's stories
## Citation
```bibtex
@misc{deepseek-children-stories,
author = {Prashant Lakhera},
title = {DeepSeek Children's Stories: A Lightweight Story Generation Model},
year = {2024},
publisher = {GitHub},
url = {https://github.com/ideaweaver-ai/DeepSeek-Children-Stories-15M-model}
}
```
## License
This project is licensed under the MIT License. See the LICENSE file for details. |
mradermacher/ICONN-e1-i1-GGUF | mradermacher | 2025-06-19T17:37:27Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"emotional-ai",
"ICONN",
"chatbot",
"base",
"en",
"base_model:ICONNAI/ICONN-e1",
"base_model:quantized:ICONNAI/ICONN-e1",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-06-19T09:04:28Z | ---
base_model: ICONNAI/ICONN-e1
extra_gated_fields:
Country: country
Date of agreement: date_picker
Full name: text
I agree to all terms in the ICONN AI License Agreement, including:
options:
- I will NOT use this model for commercial purposes without explicit written permission.
- I will NOT redistribute, upload, or share this model in any public or private
repository.
- I will NOT train new models or derivatives from this model.
- I will NOT use this model for unethical, harmful, deceptive, exploitative, or
surveillance purposes.
- I understand this license may be revoked if I breach any terms.
type: checkbox
I am using this model for:
options:
- Personal use
- Internal business use
- Academic research
- Educational purposes
- label: Other (explain below)
value: other
type: select
Organization (if any): text
Purpose explanation (if "Other"): text
extra_gated_prompt: |
By accessing or downloading this model, you agree to the ICONN AI License Agreement. This includes restrictions on commercial use, redistribution, derivative model training, and uploading to public or private repositories. You may not use this model to harm, surveil, deceive, exploit, manipulate, or conduct unethical AI research. All use must comply with ethical standards and respect human dignity.
language:
- en
library_name: transformers
license: other
license_link: LICENSE
license_name: iconn
quantized_by: mradermacher
tags:
- emotional-ai
- ICONN
- chatbot
- base
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ICONNAI/ICONN-e1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/ICONN-e1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
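For the multi-part quants below (e.g. Q4_K_M), the parts are intended to be concatenated byte-for-byte, per the linked README — a minimal sketch:
```python
import shutil

# Part names follow the table below; adjust for the quant you downloaded.
parts = [
    "ICONN-e1.i1-Q4_K_M.gguf.part1of2",
    "ICONN-e1.i1-Q4_K_M.gguf.part2of2",
]
with open("ICONN-e1.i1-Q4_K_M.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)
```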
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ICONN-e1-i1-GGUF/resolve/main/ICONN-e1.i1-IQ1_S.gguf) | i1-IQ1_S | 17.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/ICONN-e1-i1-GGUF/resolve/main/ICONN-e1.i1-IQ1_M.gguf) | i1-IQ1_M | 19.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/ICONN-e1-i1-GGUF/resolve/main/ICONN-e1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 22.4 | |
| [GGUF](https://huggingface.co/mradermacher/ICONN-e1-i1-GGUF/resolve/main/ICONN-e1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 24.9 | |
| [GGUF](https://huggingface.co/mradermacher/ICONN-e1-i1-GGUF/resolve/main/ICONN-e1.i1-IQ2_S.gguf) | i1-IQ2_S | 25.4 | |
| [GGUF](https://huggingface.co/mradermacher/ICONN-e1-i1-GGUF/resolve/main/ICONN-e1.i1-IQ2_M.gguf) | i1-IQ2_M | 27.8 | |
| [GGUF](https://huggingface.co/mradermacher/ICONN-e1-i1-GGUF/resolve/main/ICONN-e1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 28.8 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/ICONN-e1-i1-GGUF/resolve/main/ICONN-e1.i1-Q2_K.gguf) | i1-Q2_K | 30.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/ICONN-e1-i1-GGUF/resolve/main/ICONN-e1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 32.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ICONN-e1-i1-GGUF/resolve/main/ICONN-e1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 34.5 | |
| [GGUF](https://huggingface.co/mradermacher/ICONN-e1-i1-GGUF/resolve/main/ICONN-e1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 36.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/ICONN-e1-i1-GGUF/resolve/main/ICONN-e1.i1-IQ3_S.gguf) | i1-IQ3_S | 36.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/ICONN-e1-i1-GGUF/resolve/main/ICONN-e1.i1-IQ3_M.gguf) | i1-IQ3_M | 37.0 | |
| [GGUF](https://huggingface.co/mradermacher/ICONN-e1-i1-GGUF/resolve/main/ICONN-e1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 40.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/ICONN-e1-i1-GGUF/resolve/main/ICONN-e1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 43.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/ICONN-e1-i1-GGUF/resolve/main/ICONN-e1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 44.9 | |
| [GGUF](https://huggingface.co/mradermacher/ICONN-e1-i1-GGUF/resolve/main/ICONN-e1.i1-Q4_0.gguf) | i1-Q4_0 | 47.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/ICONN-e1-i1-GGUF/resolve/main/ICONN-e1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 47.9 | optimal size/speed/quality |
| [PART 1](https://huggingface.co/mradermacher/ICONN-e1-i1-GGUF/resolve/main/ICONN-e1.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ICONN-e1-i1-GGUF/resolve/main/ICONN-e1.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 51.0 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/ICONN-e1-i1-GGUF/resolve/main/ICONN-e1.i1-Q4_1.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ICONN-e1-i1-GGUF/resolve/main/ICONN-e1.i1-Q4_1.gguf.part2of2) | i1-Q4_1 | 52.7 | |
| [PART 1](https://huggingface.co/mradermacher/ICONN-e1-i1-GGUF/resolve/main/ICONN-e1.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ICONN-e1-i1-GGUF/resolve/main/ICONN-e1.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 57.9 | |
| [PART 1](https://huggingface.co/mradermacher/ICONN-e1-i1-GGUF/resolve/main/ICONN-e1.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ICONN-e1-i1-GGUF/resolve/main/ICONN-e1.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 59.7 | |
| [PART 1](https://huggingface.co/mradermacher/ICONN-e1-i1-GGUF/resolve/main/ICONN-e1.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ICONN-e1-i1-GGUF/resolve/main/ICONN-e1.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 69.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/KwaiCoder-AutoThink-preview-GGUF | mradermacher | 2025-06-19T17:37:25Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"multilingual",
"base_model:Kwaipilot/KwaiCoder-AutoThink-preview",
"base_model:quantized:Kwaipilot/KwaiCoder-AutoThink-preview",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-19T10:35:08Z | ---
base_model: Kwaipilot/KwaiCoder-AutoThink-preview
language:
- multilingual
library_name: transformers
license: other
license_link: LICENSE
license_name: kwaipilot-license
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Kwaipilot/KwaiCoder-AutoThink-preview
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/KwaiCoder-AutoThink-preview-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-AutoThink-preview-GGUF/resolve/main/KwaiCoder-AutoThink-preview.Q2_K.gguf) | Q2_K | 15.3 | |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-AutoThink-preview-GGUF/resolve/main/KwaiCoder-AutoThink-preview.Q3_K_S.gguf) | Q3_K_S | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-AutoThink-preview-GGUF/resolve/main/KwaiCoder-AutoThink-preview.Q3_K_M.gguf) | Q3_K_M | 19.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-AutoThink-preview-GGUF/resolve/main/KwaiCoder-AutoThink-preview.Q3_K_L.gguf) | Q3_K_L | 21.4 | |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-AutoThink-preview-GGUF/resolve/main/KwaiCoder-AutoThink-preview.IQ4_XS.gguf) | IQ4_XS | 22.2 | |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-AutoThink-preview-GGUF/resolve/main/KwaiCoder-AutoThink-preview.Q4_K_S.gguf) | Q4_K_S | 23.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-AutoThink-preview-GGUF/resolve/main/KwaiCoder-AutoThink-preview.Q4_K_M.gguf) | Q4_K_M | 24.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-AutoThink-preview-GGUF/resolve/main/KwaiCoder-AutoThink-preview.Q5_K_S.gguf) | Q5_K_S | 28.1 | |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-AutoThink-preview-GGUF/resolve/main/KwaiCoder-AutoThink-preview.Q5_K_M.gguf) | Q5_K_M | 28.9 | |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-AutoThink-preview-GGUF/resolve/main/KwaiCoder-AutoThink-preview.Q6_K.gguf) | Q6_K | 33.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-AutoThink-preview-GGUF/resolve/main/KwaiCoder-AutoThink-preview.Q8_0.gguf) | Q8_0 | 43.2 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
phospho-app/joshvista-ACT_BBOX-PickAndPlace-kejp2 | phospho-app | 2025-06-19T17:36:01Z | 0 | 0 | null | [
"phosphobot",
"act",
"region:us"
] | null | 2025-06-19T17:34:59Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
The object 'black ring' was detected in 0 episodes in main camera (should be: 10 episodes min). This is not enough to train a model. Check your dataset: https://lerobot-visualize-dataset.hf.space/joshvista/PickAndPlace/ and rephrase the instruction.
```
## Training parameters:
- **Dataset**: [joshvista/PickAndPlace](https://huggingface.co/datasets/joshvista/PickAndPlace)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
**Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
Quantumhash/PixelScribe | Quantumhash | 2025-06-19T17:28:21Z | 127 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"image-generation",
"Diffusers",
"PixelScribe",
"flux",
"en",
"license:other",
"diffusers:PixelScribePipeline",
"region:us"
] | text-to-image | 2025-04-20T16:45:54Z | ---
language:
- en
license: other
license_name: pixelscribe-non-commercial-license
license_link: LICENSE.md
tags:
- text-to-image
- image-generation
- Diffusers
- PixelScribe
- flux
base_model:
- sbapan41/PixelScribe
new_version: sbapan41/PixelScribe
pipeline_tag: text-to-image
library_name: diffusers
---
<div align="center">
<img src="https://huggingface.co/datasets/Quantamhash/Assets/resolve/main/images/dark_logo.png"
alt="Title card"
style="width: 500px;
height: auto;
object-position: center top;">
</div>
`PixelScribe` is a 12 billion parameter rectified flow transformer capable of generating images from text descriptions.
# Key Features
1. Cutting-edge output quality, second only to our state-of-the-art model `PixelScribe`.
2. Competitive prompt following, matching the performance of closed-source alternatives.
3. Trained using guidance distillation, making `PixelScribe` more efficient.
4. Open weights to drive new scientific research, and empower artists to develop innovative workflows.
5. Generated outputs can be used for personal, scientific, and commercial purposes as described in the license.
# Usage
We provide a reference implementation of `PixelScribe`, as well as sampling code, in a dedicated [github repository](https://github.com/MotoBwi/PixelScribe.git).
Developers and creatives looking to build on top of `PixelScribe` are encouraged to use this as a starting point.
## Diffusers
To use `PixelScribe` with the 🧨 diffusers python library, first install or upgrade diffusers
```shell
pip install -U diffusers
```
Then you can use `FluxPipeline` to run the model
```python
import torch
from diffusers import FluxPipeline
pipe = FluxPipeline.from_pretrained("sbapan41/PixelScribe", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload() #save some VRAM by offloading the model to CPU. Remove this if you have enough GPU power
prompt = "A cat holding a sign that says hello world"
image = pipe(
prompt,
height=1024,
width=1024,
guidance_scale=3.5,
num_inference_steps=50,
max_sequence_length=512,
generator=torch.Generator("cpu").manual_seed(0)
).images[0]
image.save("PixelScribe.png")
```
---
# Limitations
- This model is not intended or able to provide factual information.
- As a statistical model this checkpoint might amplify existing societal biases.
- The model may fail to generate output that matches the prompts.
- Prompt following is heavily influenced by the prompting-style.
# Out-of-Scope Use
The model and its derivatives may not be used
- In any way that violates any applicable national, federal, state, local or international law or regulation.
- For the purpose of exploiting, harming or attempting to exploit or harm minors in any way; including but not limited to the solicitation, creation, acquisition, or dissemination of child exploitative content.
- To generate or disseminate verifiably false information and/or content with the purpose of harming others.
- To generate or disseminate personal identifiable information that can be used to harm an individual.
- To harass, abuse, threaten, stalk, or bully individuals or groups of individuals.
- To create non-consensual nudity or illegal pornographic content.
- For fully automated decision making that adversely impacts an individual's legal rights or otherwise creates or modifies a binding, enforceable obligation.
- Generating or facilitating large-scale disinformation campaigns.
# License
This model falls under the [`PixelScribe` Non-Commercial License]. |
JW17/Q3-4B-MOO-b1e1-ckpt500 | JW17 | 2025-06-19T17:24:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-19T17:22:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ahmetsezginn/cefr-roberta-irt-hybrid | ahmetsezginn | 2025-06-19T17:21:54Z | 0 | 0 | null | [
"safetensors",
"roberta",
"region:us"
] | null | 2025-06-19T15:32:26Z | ---
{}
---
# DistilRoBERTa + IRT "Hybrid" CEFR Classifier
*(ahmetsezginn/cefr-roberta-irt-hybrid)*
Fine-tuned on **120 Cambridge reading passages (B1–C2)** and paired with a 2-PL Item Response Theory (IRT) engine so that a **10-question mini-test** can output:
* θ (z-score ability)
* IRT-only CEFR level
* Transformer-only CEFR level
* Ensemble โfinalโ level
* Micro-skill strengths / weaknesses (if skills are annotated)
---
## Install
```bash
python -m pip install --upgrade \
transformers>=4.42 torch \
numpy scipy huggingface_hub>=0.23
```
---
## Quick usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import json
from evaluate import evaluate_candidate # comes with this repo
MODEL_ID = "ahmetsezginn/cefr-roberta-irt-hybrid"
tok = AutoTokenizer.from_pretrained(MODEL_ID)
clf = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
clf.eval() # no grad
qbank = json.load(open("irt_qbank.json")) # download once (see below)
# 10 answers: "passageID_questionID" โ "a|b|c|d"
answers = {
"57_3": "b", "57_4": "d", "67_1": "a", "67_2": "c", "67_3": "b",
"81_5": "d", "81_6": "a", "92_2": "c", "104_1": "b", "104_4": "a"
}
result = evaluate_candidate(answers, qbank, tok, clf)
print(json.dumps(result, indent=2))
```
Example output
```json
{
"theta": -0.73,
"irt_level": "B1",
"clf_level": "B2",
"final_level": "B2",
"skills": {
"Inference": {"n":4, "accuracy":0.25, "status":"weak"},
"Detail": {"n":3, "accuracy":0.67, "status":"medium"},
"Vocab": {"n":3, "accuracy":1.00, "status":"strong"}
}
}
```
---
## Files in this repo
| Path | Description |
|------|-------------|
| `config.json`, `model.safetensors` | DistilRoBERTa classifier (4 labels) |
| `tokenizer_config.json`, `vocab.json`, `merges.txt`, `special_tokens_map.json` | Tokenizer |
| `irt_qbank.json` | 795 MCQ items → `disc`, `diff`, `answer` (+ optional `skill`) |
| `evaluate.py` | Single helper: θ MLE + ensemble logic |
| `README.md` | *(this file)* |
---
## `irt_qbank.json` schema
```jsonc
{
"57_3": {
"disc": 0.87, // discrimination (a_i)
"diff": 64.1, // raw difficulty (0โ100) โ internally z-scored
"answer": "b",
"skill": "Inference" // optional micro-skill tag
},
"...": { ... }
}
```
> **Where do IDs come from?**
> Each question is labeled `"<passageID>_<qID>"`, e.g. passage 57 Q3 → `"57_3"`.
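For intuition, `disc` and `diff` parameterize a 2-PL logistic response curve, and θ is the ability that best explains the observed answer pattern. A minimal sketch of that idea (the repo's `evaluate.py` is authoritative; the grid bounds and z-scoring here are assumptions):

```python
import numpy as np

def p_correct(theta, disc, diff_z):
    # 2-PL item response curve: probability of a correct answer at ability
    # theta, given discrimination `disc` and z-scored difficulty `diff_z`.
    return 1.0 / (1.0 + np.exp(-disc * (theta - diff_z)))

def theta_mle(responses, grid=np.linspace(-4, 4, 801)):
    # responses: iterable of (disc, diff_z, correct) triples for answered items.
    # Maximize the log-likelihood over a theta grid (simple and robust).
    ll = np.zeros_like(grid)
    for disc, diff_z, correct in responses:
        p = p_correct(grid, disc, diff_z)
        ll += np.log(p if correct else 1.0 - p)
    return float(grid[np.argmax(ll)])
```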
---
## Running the 10-question mini-test
1. **Take the test** (PDF, notebook, web form) and note the `ID = option` pairs.
2. Create `answers = { "57_3":"b", ... }` as above.
3. Call `evaluate_candidate(answers, qbank, tok, clf)` and you're done!
If you omit `tok`/`clf`, `evaluate_candidate` falls back to **IRT-only** scoring.
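For example, an IRT-only run under that fallback (assuming `tok`/`clf` simply default to `None`):

```python
# IRT-only scoring: no tokenizer or classifier loaded.
result = evaluate_candidate(answers, qbank)
print(result["theta"], result["irt_level"])
```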
---
## Adding your own MCQ items
1. Assign a unique `passageID_questionID` string.
2. Collect ≥ 200 responses and calibrate **`disc`** & **`diff`** using an IRT library (e.g. *mirt* / *py-irt*).
3. Append the calibrated item to `irt_qbank.json`.
4. Push the updated JSON to your fork or issue a PR.
*(Un-calibrated items will distort θ estimates.)*
---
## CLI pipeline shortcut (classifier-only)
```bash
python - <<'PY'
from transformers import pipeline
clf = pipeline("text-classification",
model="ahmetsezginn/cefr-roberta-irt-hybrid",
top_k=None)
text = open("sample_passage.txt").read()
print(clf(text)) # e.g. [{'label': 'C1', 'score': 0.72}, ...]
PY
```
---
## FAQ
| Question | Answer |
|----------|--------|
| **GPU required?** | No. The model is 82M params; CPU inference takes roughly 1 s per passage. |
| **Can I ignore the transformer and use IRT only?** | Yes: skip the `tok`/`clf` arguments. |
| **What if `skill` is missing?** | Those questions appear as `"Unknown"` in the skills report. |
| **Licence?** | Research / CC-BY-NC 4.0; see `LICENSE`. |
---
### Citation
If you use this work, please cite the Cambridge corpus and this repository.
---
license: cc-by-nc-4.0
language:
- en
tags:
- text-classification
- CEFR
- IRT
- distilroberta
- reading-comprehension
library_name: transformers
pipeline_tag: text-classification
model_type: roberta
datasets:
- cambridge_english_reading_mcq
metrics:
- accuracy
- macro_f1
pretty_name: DistilRoBERTa + IRT Hybrid CEFR Classifier
--- |
LaaP-ai/donut-base-invoice-v1.20 | LaaP-ai | 2025-06-19T17:17:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-06-19T06:01:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
brandonbeiler/InternVL3-38B-BNB-8bit | brandonbeiler | 2025-06-19T17:10:14Z | 0 | 0 | null | [
"safetensors",
"internvl_chat",
"fp8",
"quantization",
"dynamic",
"vision-language",
"multimodal",
"vllm",
"llm-compressor",
"internvl3",
"image-text-to-text",
"conversational",
"custom_code",
"en",
"zh",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | image-text-to-text | 2025-06-19T16:48:17Z | ---
language:
- en
- zh
tags:
- fp8
- quantization
- dynamic
- vision-language
- multimodal
- vllm
- llm-compressor
- internvl3
pipeline_tag: image-text-to-text
inference: false
license: mit
---
# 🔥 InternVL3-38B-FP8-Dynamic: Optimized Vision-Language Model 🔥
This is a **FP8 dynamic quantized** version of [OpenGVLab/InternVL3-38B](https://huggingface.co/OpenGVLab/InternVL3-38B), optimized for high-performance inference with vLLM.
The model utilizes **dynamic FP8 quantization** for optimal ease of use and deployment, achieving a significant speedup with minimal accuracy degradation on vision-language tasks.
## Key Features
- **FP8 Dynamic Quantization**: No calibration required, ready to use immediately
- **Vision-Language Optimized**: Specialized quantization recipe that preserves visual understanding
- **vLLM Ready**: Seamless integration with vLLM for production deployment
- **Memory Efficient**: ~50% memory reduction compared to FP16 original
- **Performance Boost**: Significantly faster inference on H100/L40S GPUs
- **Easy Deployment**: No calibration dataset needed for quantization
## Model Details
- **Original Model**: [OpenGVLab/InternVL3-38B](https://huggingface.co/OpenGVLab/InternVL3-38B)
- **Source Model**: OpenGVLab/InternVL3-38B
- **Quantized Model**: InternVL3-38B-BNB-8bit
- **Quantization Method**: FP8 Dynamic (W8A8)
- **Quantization Library**: [LLM Compressor](https://github.com/vllm-project/llm-compressor) v0.5.2.dev112+g6800f811
- **Quantized by**: [brandonbeiler](https://huggingface.co/brandonbeiler)
## Usage
### With vLLM (Recommended)
```python
from vllm import LLM, SamplingParams
# Load the quantized model
model = LLM(
model="brandonbeiler/InternVL3-38B-BNB-8bit",
trust_remote_code=True,
max_model_len=8192,
tensor_parallel_size=1, # Adjust based on your GPU setup
)
# Generate response
sampling_params = SamplingParams(temperature=0.7, max_tokens=512)
response = model.generate("Describe this image: <image>", sampling_params)
print(response[0].outputs[0].text)
```
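At inference time the `<image>` placeholder must be paired with actual pixel data; in recent vLLM versions this is passed via `multi_modal_data`. A sketch, assuming a local `example.jpg` and that a bare `<image>` token matches the model's expected prompt format:

```python
from PIL import Image
from vllm import SamplingParams

image = Image.open("example.jpg")  # any RGB image

outputs = model.generate(
    {
        "prompt": "<image>\nDescribe this image.",
        "multi_modal_data": {"image": image},  # pairs pixels with the placeholder
    },
    SamplingParams(temperature=0.7, max_tokens=256),
)
print(outputs[0].outputs[0].text)
```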
## Technical Specifications
### Hardware Requirements
- **Inference**: ? VRAM
- **Supported GPUs**: H100, L40S, A100 (80GB), RTX 4090 (2x for tensor parallelism)
- **GPU Architecture**: Ada Lovelace, Hopper (for optimal FP8 performance)
### Quantization Details
- **Weights**: FP8 E4M3 with dynamic per-tensor scales
- **Activations**: FP8 E4M3 with dynamic per-tensor scales
- **Preserved Components**: Vision tower, embeddings, normalization layers, mlp1
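For reference, a minimal llm-compressor sketch consistent with the recipe described above; the `ignore` patterns standing in for the preserved components are assumptions, not the exact recipe used for this checkpoint:

```python
from transformers import AutoModel
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

# InternVL ships custom modeling code, hence trust_remote_code.
model = AutoModel.from_pretrained(
    "OpenGVLab/InternVL3-38B", torch_dtype="auto", trust_remote_code=True
)

# FP8 dynamic: quantize Linear weights/activations; skip vision tower,
# head, and projector modules (regex patterns are illustrative).
recipe = QuantizationModifier(
    targets="Linear",
    scheme="FP8_DYNAMIC",
    ignore=["re:.*lm_head", "re:.*vision_model.*", "re:.*mlp1.*"],
)

oneshot(model=model, recipe=recipe, output_dir="InternVL3-38B-FP8-Dynamic")
```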
## Package Versions
This model was created using:
```
llmcompressor==0.5.2.dev112+g6800f811
compressed-tensors==latest
transformers==4.52.4
torch==2.7.0
vllm==0.9.1
```
*Quantized with ❤️ using LLM Compressor for the open-source community*
|
csikasote/mms-1b-all-nyagen-combined-62 | csikasote | 2025-06-19T17:05:09Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"nyagen",
"mms",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-06-19T15:14:40Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- nyagen
- mms
- generated_from_trainer
metrics:
- wer
model-index:
- name: mms-1b-all-nyagen-combined-62
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1b-all-nyagen-combined-62
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the NYAGEN - NYA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3739
- Wer: 0.2784
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 4
- seed: 62
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 7.2693 | 0.5025 | 100 | 0.8927 | 0.7033 |
| 0.41 | 1.0050 | 200 | 0.4877 | 0.4012 |
| 0.3253 | 1.5075 | 300 | 0.4483 | 0.3627 |
| 0.2977 | 2.0101 | 400 | 0.4427 | 0.3408 |
| 0.2816 | 2.5126 | 500 | 0.4403 | 0.3338 |
| 0.2688 | 3.0151 | 600 | 0.4096 | 0.3292 |
| 0.2611 | 3.5176 | 700 | 0.4090 | 0.3183 |
| 0.2464 | 4.0201 | 800 | 0.4096 | 0.3148 |
| 0.2431 | 4.5226 | 900 | 0.4045 | 0.3023 |
| 0.2378 | 5.0251 | 1000 | 0.3950 | 0.2938 |
| 0.2283 | 5.5276 | 1100 | 0.4060 | 0.2913 |
| 0.2327 | 6.0302 | 1200 | 0.3869 | 0.2879 |
| 0.2232 | 6.5327 | 1300 | 0.3884 | 0.2933 |
| 0.2216 | 7.0352 | 1400 | 0.3857 | 0.2930 |
| 0.2093 | 7.5377 | 1500 | 0.3835 | 0.2818 |
| 0.2105 | 8.0402 | 1600 | 0.3739 | 0.2786 |
| 0.2057 | 8.5427 | 1700 | 0.3741 | 0.2817 |
| 0.2007 | 9.0452 | 1800 | 0.3712 | 0.2753 |
| 0.2029 | 9.5477 | 1900 | 0.3743 | 0.2779 |
| 0.2023 | 10.0503 | 2000 | 0.3778 | 0.2695 |
| 0.1908 | 10.5528 | 2100 | 0.3819 | 0.2659 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
Official-Tv-mezzo-fun-17-19-Viral-videos1/Original.FULL.VIDEO.LINK.mezzo.fun.Viral.Video.Leaks.Tutorial.Official | Official-Tv-mezzo-fun-17-19-Viral-videos1 | 2025-06-19T17:00:17Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-19T16:59:45Z | <a rel="nofollow" href="https://viralflix.xyz/leaked/?fre"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?fre">๐ด CLICK HERE ๐==โบโบ Download Now)</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?fre">๐ด โคโบ๐๐ฅ๐ข๐ค ๐๐๐ซ๐ ๐ญ๐จ๐๐ (๐
๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ ๐๐ข๐ง๐ค )</a>
|
kchatr/british_orpheus_lora_adapters | kchatr | 2025-06-19T16:58:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/orpheus-3b-0.1-ft",
"base_model:finetune:unsloth/orpheus-3b-0.1-ft",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T16:57:54Z | ---
base_model: unsloth/orpheus-3b-0.1-ft
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** kchatr
- **License:** apache-2.0
- **Finetuned from model :** unsloth/orpheus-3b-0.1-ft
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
drl-robo/ppo-SnowballTarget | drl-robo | 2025-06-19T16:57:29Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2025-06-19T16:57:26Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: drl-robo/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
SidMcStarter/legal-llama3-lora-model | SidMcStarter | 2025-06-19T16:55:19Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T16:54:22Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** SidMcStarter
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
BootesVoid/cmc3d1faf001fnx8d5x3l2v42_cmc3klurq00o3nx8d09rr3wcb | BootesVoid | 2025-06-19T16:53:56Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-19T16:53:54Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ROSE1
---
# Cmc3D1Faf001Fnx8D5X3L2V42_Cmc3Klurq00O3Nx8D09Rr3Wcb
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ROSE1` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "ROSE1",
"lora_weights": "https://huggingface.co/BootesVoid/cmc3d1faf001fnx8d5x3l2v42_cmc3klurq00o3nx8d09rr3wcb/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc3d1faf001fnx8d5x3l2v42_cmc3klurq00o3nx8d09rr3wcb', weight_name='lora.safetensors')
image = pipeline('ROSE1').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
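One common pattern from those docs is fusing the adapter into the base weights at a chosen strength, which removes the LoRA overhead at inference. A sketch; the `lora_scale` value is illustrative:

```python
# Bake the LoRA into the base weights at 0.8 strength, then generate as usual.
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('ROSE1, portrait photo, soft light').images[0]
image.save("fused.png")
```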
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc3d1faf001fnx8d5x3l2v42_cmc3klurq00o3nx8d09rr3wcb/discussions) to add images that show off what you've made with this LoRA.
|
New-tutorial-kamal-Kaur-18-Go-videos/Original.FULL.VIDEO.kamal.Kaur.viral.video.Link.viral.On.Social.Media.Official | New-tutorial-kamal-Kaur-18-Go-videos | 2025-06-19T16:53:41Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-19T16:53:08Z | <a rel="nofollow" href="https://viralflix.xyz/leaked/?fre"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?fre">๐ด CLICK HERE ๐==โบโบ Download Now)</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?fre">๐ด โคโบ๐๐ฅ๐ข๐ค ๐๐๐ซ๐ ๐ญ๐จ๐๐ (๐
๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ ๐๐ข๐ง๐ค )</a>
|
ekiprop/roberta-sst2-lora-ep20-lr0p0003-bs32-2025-06-19-1623 | ekiprop | 2025-06-19T16:44:40Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:adapter:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2025-06-19T16:23:09Z | ---
library_name: peft
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-sst2-lora-ep20-lr0p0003-bs32-2025-06-19-1623
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-sst2-lora-ep20-lr0p0003-bs32-2025-06-19-1623
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2598
- Accuracy: 0.9369
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|
| 0.2283 | 0.2375 | 500 | 0.1945 | 0.9358 |
| 0.2055 | 0.4751 | 1000 | 0.1977 | 0.9381 |
| 0.2075 | 0.7126 | 1500 | 0.2174 | 0.9312 |
| 0.1802 | 0.9501 | 2000 | 0.2223 | 0.9300 |
| 0.1958 | 1.1876 | 2500 | 0.2189 | 0.9381 |
| 0.1886 | 1.4252 | 3000 | 0.2290 | 0.9312 |
| 0.1782 | 1.6627 | 3500 | 0.2375 | 0.9255 |
| 0.1999 | 1.9002 | 4000 | 0.1876 | 0.9312 |
| 0.1554 | 2.1378 | 4500 | 0.2139 | 0.9369 |
| 0.1616 | 2.3753 | 5000 | 0.1923 | 0.9358 |
| 0.1808 | 2.6128 | 5500 | 0.2049 | 0.9266 |
| 0.1515 | 2.8504 | 6000 | 0.1998 | 0.9358 |
| 0.1315 | 3.0879 | 6500 | 0.2583 | 0.9358 |
| 0.1508 | 3.3254 | 7000 | 0.1903 | 0.9404 |
| 0.1571 | 3.5629 | 7500 | 0.2312 | 0.9404 |
| 0.1583 | 3.8005 | 8000 | 0.2228 | 0.9404 |
| 0.1378 | 4.0380 | 8500 | 0.2276 | 0.9438 |
| 0.1449 | 4.2755 | 9000 | 0.2458 | 0.9358 |
| 0.1324 | 4.5131 | 9500 | 0.2719 | 0.9427 |
| 0.149 | 4.7506 | 10000 | 0.2386 | 0.9358 |
| 0.131 | 4.9881 | 10500 | 0.2376 | 0.9323 |
| 0.1354 | 5.2257 | 11000 | 0.2490 | 0.9335 |
| 0.137 | 5.4632 | 11500 | 0.2464 | 0.9358 |
| 0.1325 | 5.7007 | 12000 | 0.2200 | 0.9323 |
| 0.1419 | 5.9382 | 12500 | 0.2354 | 0.9300 |
| 0.1091 | 6.1758 | 13000 | 0.2729 | 0.9335 |
| 0.124 | 6.4133 | 13500 | 0.2453 | 0.9335 |
| 0.1233 | 6.6508 | 14000 | 0.2456 | 0.9392 |
| 0.1145 | 6.8884 | 14500 | 0.2906 | 0.9266 |
| 0.1195 | 7.1259 | 15000 | 0.2327 | 0.9381 |
| 0.1094 | 7.3634 | 15500 | 0.2616 | 0.9369 |
| 0.1096 | 7.6010 | 16000 | 0.2598 | 0.9369 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.4
- Pytorch 2.1.0+cu118
- Datasets 3.6.0
- Tokenizers 0.21.1 |
morturr/Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-headlines-comb-1-seed-42-2025-06-19 | morturr | 2025-06-19T16:44:06Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-19T16:43:49Z | ---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-headlines-comb-1-seed-42-2025-06-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-headlines-comb-1-seed-42-2025-06-19
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
videos-18-matt-kervi-javier/FULL.VIDEO.Matt.Kervi.Javier.Isaac.xyn.Viral.Video.Link.Tutorial.Official | videos-18-matt-kervi-javier | 2025-06-19T16:43:48Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-19T16:43:41Z | <!-- HTML_TAG_END --><div>
<p><a rel="nofollow" href="https://leaked-videos.com/?v=Matt+Kervi+Javier+Isaac">๐ด โคโบ๐๐ฅ๐ข๐ค ๐๐๐ซ๐ ๐ญ๐จ๐๐ (๐๐๐ญ๐๐ก ๐
๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ)</a></p>
<p><a rel="nofollow" href="https://leaked-videos.com/?v=Matt+Kervi+Javier+Isaac">๐ด โคโบ๐๐ฅ๐ข๐ค ๐๐๐ซ๐ ๐ญ๐จ๐๐ (๐
๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ ๐๐ข๐ง๐ค )</a></p>
<p><a rel="nofollow" href="https://leaked-videos.com/?v=Matt+Kervi+Javier+Isaac"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a></p>
<!-- HTML_TAG_END --></div> |
Timia123/inpo_iter3_jun19 | Timia123 | 2025-06-19T16:36:16Z | 0 | 0 | null | [
"safetensors",
"llama",
"license:apache-2.0",
"region:us"
] | null | 2025-06-19T16:27:28Z | ---
license: apache-2.0
---
|
sgonzalezygil/sd-finetuning-dreambooth-v19-1600 | sgonzalezygil | 2025-06-19T16:34:50Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2025-06-19T16:33:18Z | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Rahmaa33/Qwin2-0.5QAsRamsis | Rahmaa33 | 2025-06-19T16:33:40Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | 2025-06-19T16:33:29Z | ---
base_model: unsloth/qwen2-0.5b-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
andreagasparini/ModernBERT-base-multilingual-stress | andreagasparini | 2025-06-19T16:33:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"modernbert",
"text-classification",
"generated_from_trainer",
"base_model:clapAI/modernBERT-base-multilingual-sentiment",
"base_model:finetune:clapAI/modernBERT-base-multilingual-sentiment",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-19T16:32:59Z | ---
library_name: transformers
license: apache-2.0
base_model: clapAI/modernBERT-base-multilingual-sentiment
tags:
- generated_from_trainer
metrics:
- f1
- precision
- recall
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [clapAI/modernBERT-base-multilingual-sentiment](https://huggingface.co/clapAI/modernBERT-base-multilingual-sentiment) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1349
- F1: 0.8234
- F1 Macro: 0.8227
- F1 Micro: 0.8239
- Precision: 0.8250
- Precision Macro: 0.8257
- Precision Micro: 0.8239
- Recall: 0.8239
- Recall Macro: 0.8218
- Recall Micro: 0.8239
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | F1 Macro | F1 Micro | Precision | Precision Macro | Precision Micro | Recall | Recall Macro | Recall Micro |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|:--------:|:---------:|:---------------:|:---------------:|:------:|:------------:|:------------:|
| No log | 1.0 | 160 | 0.3541 | 0.8522 | 0.8520 | 0.8521 | 0.8536 | 0.8523 | 0.8521 | 0.8521 | 0.8531 | 0.8521 |
| No log | 2.0 | 320 | 0.6458 | 0.8485 | 0.8480 | 0.8486 | 0.8486 | 0.8486 | 0.8486 | 0.8486 | 0.8477 | 0.8486 |
| No log | 3.0 | 480 | 1.1349 | 0.8234 | 0.8227 | 0.8239 | 0.8250 | 0.8257 | 0.8239 | 0.8239 | 0.8218 | 0.8239 |
### Framework versions
- Transformers 4.48.0
- Pytorch 2.5.0+cu124
- Datasets 2.16.1
- Tokenizers 0.21.1
|