modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
Mostefa-Terbeche/diabetic-retinopathy-ddr-efficientnet_b3-advanced-20250723-013056 | Mostefa-Terbeche | 2025-08-19T04:52:46Z | 0 | 0 | null | ["diabetic-retinopathy", "medical-imaging", "pytorch", "computer-vision", "retinal-imaging", "dataset:ddr", "license:apache-2.0", "model-index", "region:us"] | null | 2025-08-19T04:31:31Z |
---
license: apache-2.0
tags:
- diabetic-retinopathy
- medical-imaging
- pytorch
- computer-vision
- retinal-imaging
datasets:
- ddr
metrics:
- accuracy
- quadratic-kappa
- auc
model-index:
- name: ddr_efficientnet_b3_advanced
results:
- task:
type: image-classification
name: Diabetic Retinopathy Classification
dataset:
type: ddr
name: DDR
metrics:
- type: accuracy
value: 0.7671439336850038
- type: quadratic-kappa
value: 0.8464516881623221
---
# Diabetic Retinopathy Classification Model
## Model Description
This model is trained for diabetic retinopathy classification using the efficientnet_b3 architecture on the ddr dataset with advanced preprocessing.
## Model Details
- **Architecture**: efficientnet_b3
- **Dataset**: ddr
- **Preprocessing**: advanced
- **Training Date**: 20250723-013056
- **Task**: 5-class diabetic retinopathy grading (0-4)
- **Directory**: ddr_efficientnet_b3_20250723-013056_new
## Performance
- **Test Accuracy**: 0.7671439336850038
- **Test Quadratic Kappa**: 0.8464516881623221
- **Validation Kappa**: 0.8464516881623221
## Usage
```python
import torch
from huggingface_hub import hf_hub_download
# Download model
model_path = hf_hub_download(
repo_id="your-username/diabetic-retinopathy-ddr-efficientnet_b3-advanced",
filename="model_best.pt"
)
# Load model (full pickled checkpoint; PyTorch >= 2.6 defaults to weights_only=True,
# so loading a pickled nn.Module may require weights_only=False)
model = torch.load(model_path, map_location='cpu')
```
## Classes
- 0: No DR (No diabetic retinopathy)
- 1: Mild DR (Mild non-proliferative diabetic retinopathy)
- 2: Moderate DR (Moderate non-proliferative diabetic retinopathy)
- 3: Severe DR (Severe non-proliferative diabetic retinopathy)
- 4: Proliferative DR (Proliferative diabetic retinopathy)
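## Inference Example
Building on the loading snippet above, here is a minimal inference sketch. The 300x300 input size and ImageNet normalization are assumptions based on common EfficientNet-B3 setups, not confirmed training settings; verify them against the actual training configuration.
```python
import torch
from PIL import Image
from torchvision import transforms

# Assumed preprocessing: EfficientNet-B3 commonly uses 300x300 inputs
# and ImageNet normalization (verify against the training config).
preprocess = transforms.Compose([
    transforms.Resize((300, 300)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

CLASS_NAMES = ["No DR", "Mild DR", "Moderate DR", "Severe DR", "Proliferative DR"]

image = Image.open("fundus.jpg").convert("RGB")  # hypothetical input image
batch = preprocess(image).unsqueeze(0)

model.eval()
with torch.no_grad():
    logits = model(batch)
grade = int(logits.argmax(dim=1))
print(f"Predicted grade {grade}: {CLASS_NAMES[grade]}")
```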
## Citation
If you use this model, please cite the associated research paper or thesis.
|
pempekmangedd/blockassist-bc-patterned_sturdy_dolphin_1755577510 | pempekmangedd | 2025-08-19T04:52:45Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "patterned sturdy dolphin", "arxiv:2504.07091", "region:us"] | null | 2025-08-19T04:52:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- patterned sturdy dolphin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
dongguanting/Qwen2.5-7B-ARPO | dongguanting | 2025-08-19T04:52:36Z | 56 | 2 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "conversational", "en", "dataset:dongguanting/ARPO-SFT-54K", "dataset:dongguanting/ARPO-RL-Reasoning-10K", "dataset:dongguanting/ARPO-RL-DeepSearch-1K", "arxiv:2507.19849", "arxiv:2505.16410", "base_model:Qwen/Qwen2.5-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-3B-Instruct", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-07-24T12:38:38Z |
---
base_model:
- Qwen/Qwen2.5-3B-Instruct
- Qwen/Qwen2.5-7B-Instruct
- meta-llama/Llama-3.1-8B-Instruct
- Qwen/Qwen3-8B-Instruct
- Qwen/Qwen3-14B-Instruct
datasets:
- dongguanting/ARPO-SFT-54K
- dongguanting/ARPO-RL-Reasoning-10K
- dongguanting/ARPO-RL-DeepSearch-1K
language: en
library_name: transformers
license: mit
pipeline_tag: text-generation
---
# Agentic Reinforced Policy Optimization (ARPO)
<p align="center">
<img src="https://github.com/dongguanting/ARPO/blob/main/logo1.png" width="150px">
</p>
This repository contains a model checkpoint for **Agentic Reinforced Policy Optimization (ARPO)**, a novel agentic Reinforcement Learning (RL) algorithm designed for training multi-turn Large Language Model (LLM)-based agents.
The model was presented in the paper [Agentic Reinforced Policy Optimization](https://huggingface.co/papers/2507.19849) (arXiv: [2507.19849](https://arxiv.org/abs/2507.19849)).
## ✨ Overview
ARPO addresses the challenge that existing RL algorithms inadequately balance LLMs' intrinsic long-horizon reasoning capabilities with their proficiency in multi-turn tool interactions. Through preliminary experiments, it was observed that LLMs tend to exhibit highly uncertain behavior, characterized by an increase in the entropy distribution of generated tokens, immediately following interactions with external tools. Motivated by this observation, ARPO incorporates an entropy-based adaptive rollout mechanism that dynamically balances global trajectory sampling and step-level sampling, thereby promoting exploration at steps with high uncertainty after tool usage.
By integrating an advantage attribution estimation, ARPO enables LLMs to internalize advantage differences in stepwise tool-use interactions. Experiments across 13 challenging benchmarks in computational reasoning, knowledge reasoning, and deep search domains demonstrate ARPO's superiority over trajectory-level RL algorithms. Notably, ARPO achieves improved performance using only half of the tool-use budget required by existing methods, offering a scalable solution for aligning LLM-based agents with real-time dynamic environments.
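To make the rollout trigger concrete, here is an illustrative sketch of the entropy signal described above. This is not the authors' implementation; the function names and the threshold value are placeholders.
```python
import torch
import torch.nn.functional as F

def token_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Shannon entropy of the next-token distribution at each position."""
    log_probs = F.log_softmax(logits, dim=-1)
    return -(log_probs.exp() * log_probs).sum(dim=-1)

def should_branch(post_tool_logits: torch.Tensor, threshold: float = 2.0) -> bool:
    # Spawn extra step-level rollouts when entropy spikes right after a
    # tool call; the threshold here is a placeholder, not a paper value.
    return token_entropy(post_tool_logits).mean().item() > threshold
```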
<p align="center">
<img width="1686" height="866" alt="intro" src="https://github.com/user-attachments/assets/8b9daf54-c4ba-4e79-bf79-f98b5a893edd" />
</p>
* In the figure (left), the initial tokens generated by the LLM after receiving each round of tool-call feedback consistently exhibit high entropy. This indicates that external tool calls introduce significant uncertainty into the LLM's reasoning process.
* In the figure (right), ARPO's performance is validated across 13 datasets. Notably, Qwen3-14B with ARPO excelled in Pass@5, achieving 61.2% on GAIA and 24.0% on HLE, while requiring only about half the tool calls compared to GRPO during training.
## 📣 Latest News
* **[July 29, 2025]**: 📄 Our paper is now available on **[arXiv](https://arxiv.org/abs/2507.19849)** and **[Hugging Face](https://huggingface.co/papers/2507.19849)** daily paper.
* **[July 25, 2025]**: 🔥 We released all our **ARPO model checkpoints (3B~14B)** and **datasets (SFT, RL, Evaluation)**. Check out the **[🤗ARPO Collection](https://huggingface.co/collections/dongguanting/arpo-688229ff8a6143fe5b4ad8ae)**. We will keep updating it!
* **[July 25, 2025]**: 🚀 Full codebase released. ARPO supports multi-tool agentic RL training for the Qwen2.5, Qwen3, and Llama3 model families. We have implemented extensive tool-call acceleration and memory optimization for RL training.
## 🔗 Links
* **Paper (Hugging Face)**: [Agentic Reinforced Policy Optimization](https://huggingface.co/papers/2507.19849)
* **Paper (arXiv)**: [https://arxiv.org/abs/2507.19849](https://arxiv.org/abs/2507.19849)
* **GitHub Repository**: [https://github.com/dongguanting/ARPO](https://github.com/dongguanting/ARPO)
* **Hugging Face Model Collection**: [ARPO Models](https://huggingface.co/collections/dongguanting/arpo-688229ff8a6143fe5b4ad8ae)
* **Hugging Face Dataset Collection**: [ARPO Datasets](https://huggingface.co/collections/dongguanting/arpo-688229ff8a6143fe5b4ad8ae)
## ⚡ Quick Start
This model can be loaded and used with the `transformers` library. Below is a basic example for text generation and multi-turn interaction. For more advanced usage, including multi-tool agentic RL training and evaluation, please refer to the [official GitHub repository](https://github.com/dongguanting/ARPO).
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the model and tokenizer
# Replace "dongguanting/Qwen3-8B-ARPO-DeepSearch" with the specific model ID you want to use
model_id = "dongguanting/Qwen3-8B-ARPO-DeepSearch" # Example from the ARPO collection
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16, # Adjust dtype based on model requirements and hardware
device_map="auto", # Automatically maps the model to available devices (e.g., GPU)
trust_remote_code=True,
)
# Prepare your conversational input
# The model supports multi-turn interactions and tool calls through its chat template.
messages = [
{"role": "user", "content": "What is the capital of France? And what is the population of that city?"},
]
# Apply the chat template and tokenize
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
# Generate a response
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.6,
top_p=0.95,
eos_token_id=[tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|im_end|>")]
)
# Decode and print the generated text
response = tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(response)
```
## 📄 Citation
If you find this work helpful, please cite our paper:
```bibtex
@misc{dong2025arpo,
title={Agentic Reinforced Policy Optimization},
author={Guanting Dong and Hangyu Mao and Kai Ma and Licheng Bao and Yifei Chen and Zhongyuan Wang and Zhongxia Chen and Jiazhen Du and Huiyang Wang and Fuzheng Zhang and Guorui Zhou and Yutao Zhu and Ji-Rong Wen and Zhicheng Dou},
year={2025},
eprint={2507.19849},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2507.19849},
}
@article{dong2025toolstar,
author = {Guanting Dong and
Yifei Chen and
Xiaoxi Li and
Jiajie Jin and
Hongjin Qian and
Yutao Zhu and
Hangyu Mao and
Guorui Zhou and
Zhicheng Dou and
Ji{-}Rong Wen},
title = {Tool-Star: Empowering LLM-Brained Multi-Tool Reasoner via Reinforcement
Learning},
journal = {CoRR},
volume = {abs/2505.16410},
year = {2025},
url = {https://doi.org/10.48550/arXiv.2505.16410},
doi = {10.48550/ARXIV.2505.16410},
eprinttype = {arXiv},
eprint = {2505.16410},
timestamp = {Thu, 26 Jun 2025 07:49:34 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2505-16410.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
## 🤝 Acknowledgements
This training implementation builds upon [Tool-Star](https://github.com/dongguanting/Tool-Star), [Llama Factory](https://github.com/hiyouga/LLaMA-Factory), [verl](https://github.com/volcengine/verl) and [ReCall](https://github.com/Agent-RL/ReCall). For evaluation, we rely on [WebThinker](https://github.com/RUC-NLPIR/WebThinker), [HIRA](https://github.com/RUC-NLPIR/HiRA), [WebSailor](https://github.com/Alibaba-NLP/WebAgent), [Search-o1](https://github.com/sunnynexus/Search-o1), and [FlashRAG](https://github.com/RUC-NLPIR/FlashRAG). The Python interpreter design references [ToRA](https://github.com/microsoft/ToRA) and [ToRL](https://github.com/GAIR-NLP/ToRL), while our models are trained using [Qwen2.5](https://qwenlm.github.io/blog/qwen2.5/). We express our sincere gratitude to these projects for their invaluable contributions to the open-source community.
## 📄 License
This project is released under the [MIT License](https://opensource.org/licenses/MIT).
## 📞 Contact
For any questions or feedback, please reach out to us at [[email protected]](mailto:[email protected]).
|
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1755577603 | sampingkaca72 | 2025-08-19T04:52:20Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "armored stealthy elephant", "arxiv:2504.07091", "region:us"] | null | 2025-08-19T04:52:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored stealthy elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
dgambettaphd/M_mis_run2_gen3_WXS_doc1000_synt64_lr1e-04_acm_MPP | dgambettaphd | 2025-08-19T04:51:23Z | 0 | 0 | transformers | ["transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-08-19T04:51:09Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/qwen2.5-coder-32b-single-verilog-i1-GGUF | mradermacher | 2025-08-19T04:49:52Z | 0 | 0 | transformers | ["transformers", "gguf", "en", "base_model:hdl2v/qwen2.5-coder-32b-single-verilog", "base_model:quantized:hdl2v/qwen2.5-coder-32b-single-verilog", "endpoints_compatible", "region:us", "imatrix", "conversational"] | null | 2025-08-19T01:43:26Z |
---
base_model: hdl2v/qwen2.5-coder-32b-single-verilog
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/hdl2v/qwen2.5-coder-32b-single-verilog
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#qwen2.5-coder-32b-single-verilog-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/qwen2.5-coder-32b-single-verilog-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
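As a quick example, any single-file quant from the table below can be run directly with llama.cpp. The file name here is the Q4_K_M quant from this repo; adjust it to whichever quant you download, and note the prompt is only a placeholder.
```bash
# Minimal llama.cpp example; assumes llama.cpp is already built.
./llama-cli \
  -m qwen2.5-coder-32b-single-verilog.i1-Q4_K_M.gguf \
  -p "Write a Verilog module implementing a 4-bit synchronous counter." \
  -n 256
```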
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-coder-32b-single-verilog-i1-GGUF/resolve/main/qwen2.5-coder-32b-single-verilog.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-coder-32b-single-verilog-i1-GGUF/resolve/main/qwen2.5-coder-32b-single-verilog.i1-IQ1_S.gguf) | i1-IQ1_S | 7.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-coder-32b-single-verilog-i1-GGUF/resolve/main/qwen2.5-coder-32b-single-verilog.i1-IQ1_M.gguf) | i1-IQ1_M | 8.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-coder-32b-single-verilog-i1-GGUF/resolve/main/qwen2.5-coder-32b-single-verilog.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-coder-32b-single-verilog-i1-GGUF/resolve/main/qwen2.5-coder-32b-single-verilog.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-coder-32b-single-verilog-i1-GGUF/resolve/main/qwen2.5-coder-32b-single-verilog.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-coder-32b-single-verilog-i1-GGUF/resolve/main/qwen2.5-coder-32b-single-verilog.i1-IQ2_M.gguf) | i1-IQ2_M | 11.4 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-coder-32b-single-verilog-i1-GGUF/resolve/main/qwen2.5-coder-32b-single-verilog.i1-Q2_K_S.gguf) | i1-Q2_K_S | 11.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-coder-32b-single-verilog-i1-GGUF/resolve/main/qwen2.5-coder-32b-single-verilog.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-coder-32b-single-verilog-i1-GGUF/resolve/main/qwen2.5-coder-32b-single-verilog.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-coder-32b-single-verilog-i1-GGUF/resolve/main/qwen2.5-coder-32b-single-verilog.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-coder-32b-single-verilog-i1-GGUF/resolve/main/qwen2.5-coder-32b-single-verilog.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-coder-32b-single-verilog-i1-GGUF/resolve/main/qwen2.5-coder-32b-single-verilog.i1-IQ3_S.gguf) | i1-IQ3_S | 14.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-coder-32b-single-verilog-i1-GGUF/resolve/main/qwen2.5-coder-32b-single-verilog.i1-IQ3_M.gguf) | i1-IQ3_M | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-coder-32b-single-verilog-i1-GGUF/resolve/main/qwen2.5-coder-32b-single-verilog.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-coder-32b-single-verilog-i1-GGUF/resolve/main/qwen2.5-coder-32b-single-verilog.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-coder-32b-single-verilog-i1-GGUF/resolve/main/qwen2.5-coder-32b-single-verilog.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-coder-32b-single-verilog-i1-GGUF/resolve/main/qwen2.5-coder-32b-single-verilog.i1-Q4_0.gguf) | i1-Q4_0 | 18.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-coder-32b-single-verilog-i1-GGUF/resolve/main/qwen2.5-coder-32b-single-verilog.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-coder-32b-single-verilog-i1-GGUF/resolve/main/qwen2.5-coder-32b-single-verilog.i1-Q4_K_M.gguf) | i1-Q4_K_M | 19.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-coder-32b-single-verilog-i1-GGUF/resolve/main/qwen2.5-coder-32b-single-verilog.i1-Q4_1.gguf) | i1-Q4_1 | 20.7 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-coder-32b-single-verilog-i1-GGUF/resolve/main/qwen2.5-coder-32b-single-verilog.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-coder-32b-single-verilog-i1-GGUF/resolve/main/qwen2.5-coder-32b-single-verilog.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-coder-32b-single-verilog-i1-GGUF/resolve/main/qwen2.5-coder-32b-single-verilog.i1-Q6_K.gguf) | i1-Q6_K | 27.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
bonnieliu2002/act_collect_empty_bottle_black_white_wrist_2k_bs8 | bonnieliu2002 | 2025-08-19T04:48:09Z | 0 | 0 | lerobot | ["lerobot", "safetensors", "act", "robotics", "dataset:bonnieliu2002/collect_empty_bottle_black_white_wrist", "arxiv:2304.13705", "license:apache-2.0", "region:us"] | robotics | 2025-08-19T04:47:42Z |
---
datasets: bonnieliu2002/collect_empty_bottle_black_white_wrist
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- act
- robotics
- lerobot
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version of how to train and run inference/evaluation:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
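To load the policy directly in Python, a sketch along these lines should work; the import path below is the pre-1.0 LeRobot layout and may differ in newer versions, so check the docs linked above.
```python
# Import path varies across LeRobot versions (assumption: pre-1.0 layout).
from lerobot.common.policies.act.modeling_act import ACTPolicy

policy = ACTPolicy.from_pretrained(
    "bonnieliu2002/act_collect_empty_bottle_black_white_wrist_2k_bs8"
)
policy.eval()
```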
---
## Model Details
- **License:** apache-2.0
|
deeee112222/mistral7b_cwe_expert_adapter | deeee112222 | 2025-08-19T04:47:57Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-08-19T04:47:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AnonymousCS/xlmr_spanish_immigration1 | AnonymousCS | 2025-08-19T04:46:57Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-large", "base_model:finetune:FacebookAI/xlm-roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2025-08-19T04:39:51Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_spanish_immigration1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_spanish_immigration1
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3031
- Accuracy: 0.8846
- 1-f1: 0.8148
- 1-recall: 0.7674
- 1-precision: 0.8684
- Balanced Acc: 0.8550
## Model description
More information needed
## Intended uses & limitations
More information needed
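In the absence of official usage docs, here is a minimal sketch using the transformers text-classification pipeline. The semantics of the 0/1 labels are not documented on this card, so treat the output labels as an assumption to verify.
```python
from transformers import pipeline

clf = pipeline("text-classification", model="AnonymousCS/xlmr_spanish_immigration1")
# What labels 0 and 1 encode is not documented on this card.
print(clf("Los inmigrantes contribuyen a la economía del país."))
```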
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.5731 | 1.0 | 5 | 0.5155 | 0.6692 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.4482 | 2.0 | 10 | 0.3814 | 0.8769 | 0.7838 | 0.6744 | 0.9355 | 0.8257 |
| 0.3285 | 3.0 | 15 | 0.2765 | 0.9077 | 0.85 | 0.7907 | 0.9189 | 0.8781 |
| 0.1952 | 4.0 | 20 | 0.2413 | 0.9154 | 0.8706 | 0.8605 | 0.8810 | 0.9015 |
| 0.1912 | 5.0 | 25 | 0.2758 | 0.9077 | 0.8571 | 0.8372 | 0.8780 | 0.8899 |
| 0.1327 | 6.0 | 30 | 0.3031 | 0.8846 | 0.8148 | 0.7674 | 0.8684 | 0.8550 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
catme0w/MolScribe-Occlusion | catme0w | 2025-08-19T04:46:13Z | 0 | 0 | null | ["base_model:yujieq/MolScribe", "base_model:finetune:yujieq/MolScribe", "license:mit", "region:us"] | null | 2025-08-18T04:26:25Z |
---
license: mit
base_model:
- yujieq/MolScribe
new_version: catme0w/MolScribe-Long
---
|
akaredmiru/blockassist-bc-stealthy_diving_macaque_1755578619 | akaredmiru | 2025-08-19T04:45:08Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stealthy diving macaque", "arxiv:2504.07091", "region:us"] | null | 2025-08-19T04:44:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stealthy diving macaque
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
BootesVoid/cmehzn3o40q9srts86grevzph_cmei163280qdgrts8jzj6rm0y | BootesVoid | 2025-08-19T04:44:17Z | 0 | 0 | diffusers | ["diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us"] | text-to-image | 2025-08-19T04:44:16Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: EMOTIONAL
---
# Cmehzn3O40Q9Srts86Grevzph_Cmei163280Qdgrts8Jzj6Rm0Y
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `EMOTIONAL` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "EMOTIONAL",
"lora_weights": "https://huggingface.co/BootesVoid/cmehzn3o40q9srts86grevzph_cmei163280qdgrts8jzj6rm0y/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmehzn3o40q9srts86grevzph_cmei163280qdgrts8jzj6rm0y', weight_name='lora.safetensors')
image = pipeline('EMOTIONAL').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmehzn3o40q9srts86grevzph_cmei163280qdgrts8jzj6rm0y/discussions) to add images that show off what you’ve made with this LoRA.
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755578448 | IvanJAjebu | 2025-08-19T04:42:37Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us"] | null | 2025-08-19T04:42:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Kurosawama/Llama-3.1-8B-Translation-align | Kurosawama | 2025-08-19T04:40:44Z | 0 | 0 | transformers | ["transformers", "safetensors", "trl", "dpo", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-08-19T04:40:41Z |
---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BootesVoid/cmehzn3o40q9srts86grevzph_cmehzzmcn0qaprts88x3qkax0 | BootesVoid | 2025-08-19T04:37:33Z | 0 | 0 | diffusers | ["diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us"] | text-to-image | 2025-08-19T04:37:32Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: PSYCOLOGY
---
# Cmehzn3O40Q9Srts86Grevzph_Cmehzzmcn0Qaprts88X3Qkax0
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `PSYCOLOGY` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "PSYCOLOGY",
"lora_weights": "https://huggingface.co/BootesVoid/cmehzn3o40q9srts86grevzph_cmehzzmcn0qaprts88x3qkax0/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmehzn3o40q9srts86grevzph_cmehzzmcn0qaprts88x3qkax0', weight_name='lora.safetensors')
image = pipeline('PSYCOLOGY').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmehzn3o40q9srts86grevzph_cmehzzmcn0qaprts88x3qkax0/discussions) to add images that show off what you’ve made with this LoRA.
|
subsectmusic/qwriko-420-checkpoint-100 | subsectmusic | 2025-08-19T04:36:33Z | 0 | 0 | peft | ["peft", "safetensors", "base_model:adapter:unsloth/Qwen3-4B-Instruct-2507", "lora", "sft", "transformers", "trl", "unsloth", "arxiv:1910.09700", "base_model:unsloth/Qwen3-4B-Instruct-2507", "region:us"] | null | 2025-08-19T04:36:00Z |
---
base_model: unsloth/Qwen3-4B-Instruct-2507
library_name: peft
tags:
- base_model:adapter:unsloth/Qwen3-4B-Instruct-2507
- lora
- sft
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
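Since this repo is a PEFT (LoRA) adapter on unsloth/Qwen3-4B-Instruct-2507 (per the metadata above), a minimal loading sketch would look like the following; exact dtype/device settings are left to your hardware.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/Qwen3-4B-Instruct-2507"       # from this card's base_model field
adapter_id = "subsectmusic/qwriko-420-checkpoint-100"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter
```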
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
koloni/blockassist-bc-deadly_graceful_stingray_1755576508 | koloni | 2025-08-19T04:35:58Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly graceful stingray", "arxiv:2504.07091", "region:us"] | null | 2025-08-19T04:35:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kimxxxx/mistral_r16_a32_b8_gas2_lr5e-5_4500tk_2epoch_test | kimxxxx | 2025-08-19T04:35:54Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-08-19T04:35:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
taochengfei/llama-3.2-3b-it-beta_assistant_v0.2_merged
|
taochengfei
| 2025-08-19T04:34:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T04:30:52Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
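The section above is left as a placeholder by the template. As a minimal sketch (assuming the repo hosts a standard transformers chat model, which its `llama` and `conversational` tags suggest), inference could look like this:

```python
from transformers import pipeline

# Sketch only: assumes a standard causal LM with a chat template.
pipe = pipeline(
    "text-generation",
    model="taochengfei/llama-3.2-3b-it-beta_assistant_v0.2_merged",
)
messages = [{"role": "user", "content": "Hello! What can you do?"}]
out = pipe(messages, max_new_tokens=64)
print(out[0]["generated_text"])  # full conversation including the assistant reply
```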
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
REEA-GLOBAL/Qwen2.5-VL-7B-Instruct-ft-20250819025724539
|
REEA-GLOBAL
| 2025-08-19T04:34:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-VL-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-08-19T04:27:24Z |
---
base_model: unsloth/Qwen2.5-VL-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** REEA-GLOBAL
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-VL-7B-Instruct
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
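The card ships no usage snippet. As a hedged sketch (the image URL is a placeholder, and the `apply_chat_template` path assumes a recent transformers release with Qwen2.5-VL support), inference could look like this:

```python
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

# Sketch under assumptions: recent transformers with Qwen2.5-VL classes;
# the image URL below is a placeholder, not real data.
model_id = "REEA-GLOBAL/Qwen2.5-VL-7B-Instruct-ft-20250819025724539"
processor = AutoProcessor.from_pretrained(model_id)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": [
    {"type": "image", "image": "https://example.com/sample.png"},
    {"type": "text", "text": "Describe this image."},
]}]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
# Decode only the newly generated tokens, not the prompt.
print(processor.batch_decode(out[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0])
```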
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755577896
|
IvanJAjebu
| 2025-08-19T04:32:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T04:32:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
meandyou200175/e5-query-sql
|
meandyou200175
| 2025-08-19T04:30:56Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:10494",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-08-19T00:28:36Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:10494
- loss:MultipleNegativesRankingLoss
base_model: vinai/phobert-base-v2
widget:
- source_sentence: tôi cần bếp từ công suất lớn hơn 1870W
sentences:
- 'Bếp điện từ Sunhouse SHD6862, 980W, mặt kính chịu lực, Giá: 2.200.000'
- 'Bếp điện từ Sunhouse SHD6862, 1500W, mặt kính chịu lực, Giá: 2.200.000'
- 'Bếp điện từ Sunhouse SHD6862, 1400W, mặt kính chịu lực, Giá: 2.200.000'
- 'Bếp điện từ Sunhouse SHD6862, 1100W, mặt kính chịu lực, Giá: 2.200.000'
- 'Màn hình Dell UltraSharp U2720Q, 27", 4K UHD, Giá: 11.500.000'
- 'Máy xay sinh tố cầm tay Braun MQ520, công suất 600W, 2 cối, Giá: 880.000'
- 'Bếp điện từ Sunhouse SHD6862, 1300W, mặt kính chịu lực, Giá: 2.200.000'
- 'Bình nóng lạnh Ariston Andris2 15L, Công suất 2500W, Giá: 2.600.000'
- 'Loa Bluetooth Sony SRS-XP500, công suất 120W, pin 20h, Giá: 5.800.000'
- 'Bếp điện từ Sunhouse SHD6862, 1600W, mặt kính chịu lực, Giá: 2.200.000'
- 'Xe máy điện VinFast Theon S, Tốc độ tối đa 90km/h, Pin 3500W, Giá: 63.000.000'
- 'Máy ảnh Canon PowerShot G7X Mark III, Trọng lượng 300g, 20.1MP, Giá: 15.500.000'
- 'Bếp điện từ Sunhouse SHD6862, 1000W, mặt kính chịu lực, Giá: 2.200.000'
- 'Bếp điện từ Sunhouse SHD6862, 1200W, mặt kính chịu lực, Giá: 2.200.000'
- 'Bếp điện từ Sunhouse SHD6862, 1800W, mặt kính chịu lực, Giá: 2.200.000'
- 'Bếp điện từ Sunhouse SHD6862, 2200W, mặt kính chịu lực, Giá: 2.200.000'
- source_sentence: có tủ lạnh dung tích ít nhất 319 lít
sentences:
- 'Bàn là hơi nước Philips GC2990, công suất 1800W, Giá: 680.000'
- 'Tủ lạnh Hitachi Inverter R-FVX450PGV9, Dung tích 180L, Ngăn đá trên, Công nghệ
Inverter, Giá: 14.900.000'
- 'Tủ lạnh Hitachi Inverter R-FVX450PGV9, Dung tích 160L, Ngăn đá trên, Công nghệ
Inverter, Giá: 14.900.000'
- 'Tủ lạnh Hitachi Inverter R-FVX450PGV9, Dung tích 200L, Ngăn đá trên, Công nghệ
Inverter, Giá: 14.900.000'
- 'Xe điện Anbico AP1500, Tốc độ 45km/h, Nặng 48kg, Pin Lithium, Giá: 14.200.000'
- 'Sofa nỉ cao cấp nhập khẩu, dài 2.1m, khung gỗ sồi, Giá: 14.800.000'
- 'Tủ lạnh Hitachi Inverter R-FVX450PGV9, Dung tích 250L, Ngăn đá trên, Công nghệ
Inverter, Giá: 14.900.000'
- 'Tủ lạnh Hitachi Inverter R-FVX450PGV9, Dung tích 230L, Ngăn đá trên, Công nghệ
Inverter, Giá: 14.900.000'
- 'Tủ lạnh Hitachi Inverter R-FVX450PGV9, Dung tích 456L, Ngăn đá trên, Công nghệ
Inverter, Giá: 14.900.000'
- 'Máy xay Philips HR2115, công suất 515W, cối 1.5L, Giá: 1.203.000'
- 'Camera hành trình Vietmap C65, Độ phân giải 4K, Góc rộng 170°, Giá: 3.800.000'
- 'Tủ lạnh Hitachi Inverter R-FVX450PGV9, Dung tích 240L, Ngăn đá trên, Công nghệ
Inverter, Giá: 14.900.000'
- 'Tủ lạnh Hitachi Inverter R-FVX450PGV9, Dung tích 290L, Ngăn đá trên, Công nghệ
Inverter, Giá: 14.900.000'
- 'Tủ lạnh Hitachi Inverter R-FVX450PGV9, Dung tích 170L, Ngăn đá trên, Công nghệ
Inverter, Giá: 14.900.000'
- 'Tủ lạnh Hitachi Inverter R-FVX450PGV9, Dung tích 300L, Ngăn đá trên, Công nghệ
Inverter, Giá: 14.900.000'
- 'Laptop HP Pavilion 15, RAM 16GB, SSD 512GB, Intel Core i5, Giá: 18.900.000'
- source_sentence: mình cần máy bơm nước công suất tối thiểu 100W
sentences:
- 'Tủ lạnh Sanaky 250L, 2 ngăn đông - mát, Inverter tiết kiệm điện, Giá: 7.800.000'
- 'Máy bơm Panasonic GP-250JXK, Công suất 93.2W, Đẩy cao 32m, Giá: 2.500.000'
- 'Máy bơm Panasonic GP-250JXK, Công suất 80.5W, Đẩy cao 32m, Giá: 2.500.000'
- 'Xe điện Giant Momentum iLike, Tốc độ 45km/h, Nặng 52kg, Pin Lithium, Giá: 18.500.000'
- 'Máy bơm Panasonic GP-250JXK, Công suất 250W, Đẩy cao 32m, Giá: 2.500.000'
- 'Máy bơm Panasonic GP-250JXK, Công suất 90.7W, Đẩy cao 32m, Giá: 2.500.000'
- 'Máy bơm Panasonic GP-250JXK, Công suất 94.2W, Đẩy cao 32m, Giá: 2.500.000'
- 'Máy bơm Panasonic GP-250JXK, Công suất 59.7W, Đẩy cao 32m, Giá: 2.500.000'
- 'Máy bơm Panasonic GP-250JXK, Công suất 76.4W, Đẩy cao 32m, Giá: 2.500.000'
- 'Smartwatch Huawei Watch GT 3, Màn hình AMOLED 1.6", Pin 14 ngày, Giá: 5.500.000'
- 'Tủ lạnh Samsung Inverter RT35K5982SL, Dung tích 360L, Công nghệ Twin Cooling
Plus, Giá: 10.900.000'
- 'Máy bơm Panasonic GP-250JXK, Công suất 85.0W, Đẩy cao 32m, Giá: 2.500.000'
- 'Máy sấy tóc Panasonic EH-ND11, 1800W, chế độ ion âm, Giá: 780.000'
- 'Máy bơm Panasonic GP-250JXK, Công suất 63.4W, Đẩy cao 32m, Giá: 2.500.000'
- 'Máy bơm Panasonic GP-250JXK, Công suất 62.9W, Đẩy cao 32m, Giá: 2.500.000'
- 'Máy bơm Panasonic GP-250JXK, Công suất 95.0W, Đẩy cao 32m, Giá: 2.500.000'
- source_sentence: mình cần máy ảnh nặng nhỏ hơn 684g
sentences:
- 'Máy ảnh Canon EOS M50 Mark II, 24.1MP, Nặng 920g, Lens kit 15-45mm, Giá: 17.800.000'
- 'Máy ảnh Canon EOS M50 Mark II, 24.1MP, Nặng 740g, Lens kit 15-45mm, Giá: 17.800.000'
- 'Máy ảnh Canon EOS M50 Mark II, 24.1MP, Nặng 580g, Lens kit 15-45mm, Giá: 17.800.000'
- 'Kem dưỡng ẩm Hada Labo Gokujyun 50g, Chứa hyaluronic acid, Không hương liệu,
Dành cho da khô, Xuất xứ: Nhật Bản, Giá: 295.000'
- 'Máy ảnh Canon EOS M50 Mark II, 24.1MP, Nặng 980g, Lens kit 15-45mm, Giá: 17.800.000'
- 'Máy ảnh Canon EOS M50 Mark II, 24.1MP, Nặng 910g, Lens kit 15-45mm, Giá: 17.800.000'
- 'Máy ảnh Canon EOS M50 Mark II, 24.1MP, Nặng 960g, Lens kit 15-45mm, Giá: 17.800.000'
- 'Máy ảnh Canon EOS M50 Mark II, 24.1MP, Nặng 930g, Lens kit 15-45mm, Giá: 17.800.000'
- 'Máy ảnh Canon EOS M50 Mark II, 24.1MP, Nặng 890g, Lens kit 15-45mm, Giá: 17.800.000'
- 'Tủ lạnh LG Inverter GR-B22FTL, Dung tích 450L, Công nghệ Linear Cooling, Giá:
13.500.000'
- 'Máy ảnh Canon EOS M50 Mark II, 24.1MP, Nặng 940g, Lens kit 15-45mm, Giá: 17.800.000'
- 'Bàn phím Akko 3068B Plus, Switch Akko V3, Tuổi thọ 60 triệu lần nhấn, Giá: 2.200.000'
- 'Tủ lạnh Toshiba Inverter GR-RT500, Dung tích 500L, Ngăn đông lớn, Giá: 12.900.000'
- 'Máy ảnh Canon EOS M50 Mark II, 24.1MP, Nặng 1000g, Lens kit 15-45mm, Giá: 17.800.000'
- 'Máy ảnh Canon EOS M50 Mark II, 24.1MP, Nặng 990g, Lens kit 15-45mm, Giá: 17.800.000'
- Ổ cắm thông minh Xiaomi Mi Power Strip, chịu tải 2500W, 6 ổ cắm, WiFi, Giá 390.000
- source_sentence: Cho mình ghế massage Daikiosan công suất nhỏ hơn 158W có không
ạ?
sentences:
- 'Smartphone Xiaomi Redmi Note 12, RAM 6GB, ROM 128GB, pin 5000mAh, Giá: 4.790.000'
- 'Xe côn tay Yamaha TFX 150, Động cơ 155cc, Phun xăng điện tử, Giá: 55.000.000'
- 'Ghế massage Daikiosan DK-180, công suất 180W, nhiều chế độ, Giá: 3.950.000'
- 'Ghế massage Daikiosan DK-200, công suất 140W, nhiều chế độ, Giá: 4.250.000'
- 'Ghế massage Daikiosan DK-180, công suất 230W, nhiều chế độ, Giá: 3.950.000'
- 'Loa Sony SRS-XB23, công suất 25W, pin 12h, chống nước IP67, Giá: 3.200.000'
- 'Son dưỡng môi Vaseline Lip Therapy, Trọng lượng: 7g, Thành phần: petroleum jelly,
Mùi: hoa hồng, Xuất xứ: Mỹ, Giá: 95.000'
- 'Ghế massage Daikiosan DK-180, công suất 220W, nhiều chế độ, Giá: 3.950.000'
- 'Ghế massage Daikiosan DK-180, công suất 240W, nhiều chế độ, Giá: 3.950.000'
- 'Ghế massage Daikiosan DK-180, công suất 210W, nhiều chế độ, Giá: 3.950.000'
- 'Ghế massage Daikiosan DK-180, công suất 170W, nhiều chế độ, Giá: 3.950.000'
- 'Ghế massage Daikiosan DK-180, công suất 200W, nhiều chế độ, Giá: 3.950.000'
- 'Robot hút bụi Ecovacs Deebot N79S, lực hút 3000Pa, pin 5200mAh, Giá: 7.250.000'
- 'Ghế massage Daikiosan DK-180, công suất 120W, nhiều chế độ, Giá: 3.950.000'
- 'Máy giặt Electrolux Inverter EWF9024BDWB, 9kg, Công nghệ UltraMix, Giá: 10.500.000'
- 'Loa Soundbar Samsung HW-T420, công suất 120W, bluetooth, Giá: 2.450.000'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@2
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_accuracy@100
- cosine_precision@1
- cosine_precision@2
- cosine_precision@5
- cosine_precision@10
- cosine_precision@100
- cosine_recall@1
- cosine_recall@2
- cosine_recall@5
- cosine_recall@10
- cosine_recall@100
- cosine_ndcg@10
- cosine_mrr@1
- cosine_mrr@2
- cosine_mrr@5
- cosine_mrr@10
- cosine_mrr@100
- cosine_map@100
model-index:
- name: SentenceTransformer based on vinai/phobert-base-v2
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.5257289879931389
name: Cosine Accuracy@1
- type: cosine_accuracy@2
value: 0.7530017152658662
name: Cosine Accuracy@2
- type: cosine_accuracy@5
value: 0.9562607204116638
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9931389365351629
name: Cosine Accuracy@10
- type: cosine_accuracy@100
value: 0.9965694682675815
name: Cosine Accuracy@100
- type: cosine_precision@1
value: 0.5257289879931389
name: Cosine Precision@1
- type: cosine_precision@2
value: 0.3765008576329331
name: Cosine Precision@2
- type: cosine_precision@5
value: 0.19125214408233276
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.0993138936535163
name: Cosine Precision@10
- type: cosine_precision@100
value: 0.009965694682675816
name: Cosine Precision@100
- type: cosine_recall@1
value: 0.5257289879931389
name: Cosine Recall@1
- type: cosine_recall@2
value: 0.7530017152658662
name: Cosine Recall@2
- type: cosine_recall@5
value: 0.9562607204116638
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9931389365351629
name: Cosine Recall@10
- type: cosine_recall@100
value: 0.9965694682675815
name: Cosine Recall@100
- type: cosine_ndcg@10
value: 0.776023945219392
name: Cosine Ndcg@10
- type: cosine_mrr@1
value: 0.5257289879931389
name: Cosine Mrr@1
- type: cosine_mrr@2
value: 0.6393653516295026
name: Cosine Mrr@2
- type: cosine_mrr@5
value: 0.6983704974271029
name: Cosine Mrr@5
- type: cosine_mrr@10
value: 0.7039832285115316
name: Cosine Mrr@10
- type: cosine_mrr@100
value: 0.7042308189713864
name: Cosine Mrr@100
- type: cosine_map@100
value: 0.704230818971385
name: Cosine Map@100
---
# SentenceTransformer based on vinai/phobert-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) <!-- at revision e2375d266bdf39c6e8e9a87af16a5da3190b0cc8 -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("meandyou200175/e5-query-sql")
# Run inference
sentences = [
'Cho mình ghế massage Daikiosan công suất nhỏ hơn 158W có không ạ?',
'Ghế massage Daikiosan DK-180, công suất 120W, nhiều chế độ, Giá: 3.950.000',
'Ghế massage Daikiosan DK-180, công suất 210W, nhiều chế độ, Giá: 3.950.000',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) (a minimal reproduction sketch follows the metrics table below)
| Metric | Value |
|:---------------------|:----------|
| cosine_accuracy@1 | 0.5257 |
| cosine_accuracy@2 | 0.753 |
| cosine_accuracy@5 | 0.9563 |
| cosine_accuracy@10 | 0.9931 |
| cosine_accuracy@100 | 0.9966 |
| cosine_precision@1 | 0.5257 |
| cosine_precision@2 | 0.3765 |
| cosine_precision@5 | 0.1913 |
| cosine_precision@10 | 0.0993 |
| cosine_precision@100 | 0.01 |
| cosine_recall@1 | 0.5257 |
| cosine_recall@2 | 0.753 |
| cosine_recall@5 | 0.9563 |
| cosine_recall@10 | 0.9931 |
| cosine_recall@100 | 0.9966 |
| **cosine_ndcg@10** | **0.776** |
| cosine_mrr@1 | 0.5257 |
| cosine_mrr@2 | 0.6394 |
| cosine_mrr@5 | 0.6984 |
| cosine_mrr@10 | 0.704 |
| cosine_mrr@100 | 0.7042 |
| cosine_map@100 | 0.7042 |
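As a minimal, hypothetical reproduction sketch (the toy queries and corpus below stand in for the real evaluation split), the evaluator can be run like this:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Toy stand-ins for the real evaluation split (illustrative only).
queries = {"q1": "mình cần máy ảnh nặng nhỏ hơn 684g"}
corpus = {
    "d1": "Máy ảnh Canon EOS M50 Mark II, 24.1MP, Nặng 580g, Lens kit 15-45mm, Giá: 17.800.000",
    "d2": "Máy ảnh Canon EOS M50 Mark II, 24.1MP, Nặng 920g, Lens kit 15-45mm, Giá: 17.800.000",
}
relevant_docs = {"q1": {"d1"}}  # which corpus docs answer each query

model = SentenceTransformer("meandyou200175/e5-query-sql")
evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="toy")
print(evaluator(model))  # dict of accuracy/precision/recall/NDCG/MRR/MAP at various k
```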
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 10,494 training samples
* Columns: <code>query</code>, <code>positive</code>, <code>negative_1</code>, <code>negative_2</code>, <code>negative_3</code>, <code>negative_4</code>, <code>negative_5</code>, <code>negative_6</code>, <code>negative_7</code>, <code>negative_8</code>, <code>negative_9</code>, <code>negative_10</code>, <code>negative_11</code>, <code>negative_12</code>, <code>negative_13</code>, <code>negative_14</code>, and <code>negative_15</code>
* Approximate statistics based on the first 1000 samples:
| | query | positive | negative_1 | negative_2 | negative_3 | negative_4 | negative_5 | negative_6 | negative_7 | negative_8 | negative_9 | negative_10 | negative_11 | negative_12 | negative_13 | negative_14 | negative_15 |
|:--------|:---------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string | string | string | string | string | string | string | string | string | string | string | string | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 14.4 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 27.51 tokens</li><li>max: 91 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 28.07 tokens</li><li>max: 91 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 27.79 tokens</li><li>max: 91 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 27.84 tokens</li><li>max: 91 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 27.87 tokens</li><li>max: 91 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 27.86 tokens</li><li>max: 96 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 27.84 tokens</li><li>max: 91 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 27.97 tokens</li><li>max: 91 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 28.06 tokens</li><li>max: 93 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 27.92 tokens</li><li>max: 96 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 27.64 tokens</li><li>max: 91 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 27.3 tokens</li><li>max: 110 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 27.74 tokens</li><li>max: 110 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 27.3 tokens</li><li>max: 110 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 27.41 tokens</li><li>max: 110 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 27.4 tokens</li><li>max: 110 tokens</li></ul> |
* Samples:
| query | positive | negative_1 | negative_2 | negative_3 | negative_4 | negative_5 | negative_6 | negative_7 | negative_8 | negative_9 | negative_10 | negative_11 | negative_12 | negative_13 | negative_14 | negative_15 |
|:----------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|
| <code>Máy hút ẩm công suất 34L/ngày và độ ồn nhỏ hơn 45dB</code> | <code>Máy hút ẩm Sharp DW-J27FV-S, công suất hút 25L/ngày, độ ồn chỉ 38dB vận hành êm ái, lọc HEPA loại bỏ bụi mịn, khử mùi bằng than hoạt tính, bình chứa 4L, có chế độ sấy quần áo, bánh xe di chuyển linh hoạt, bảo hành 24 tháng, Giá 8.790.000</code> | <code>Máy hút ẩm Sharp DW-J27FV-S, công suất hút 41.8L/ngày, độ ồn chỉ 38dB vận hành êm ái, lọc HEPA loại bỏ bụi mịn, khử mùi bằng than hoạt tính, bình chứa 4L, có chế độ sấy quần áo, bánh xe di chuyển linh hoạt, bảo hành 24 tháng, Giá 8.790.000</code> | <code>Máy hút ẩm Sharp DW-J27FV-S, công suất hút 39.8L/ngày, độ ồn chỉ 38dB vận hành êm ái, lọc HEPA loại bỏ bụi mịn, khử mùi bằng than hoạt tính, bình chứa 4L, có chế độ sấy quần áo, bánh xe di chuyển linh hoạt, bảo hành 24 tháng, Giá 8.790.000</code> | <code>Máy hút ẩm Sharp DW-J27FV-S, công suất hút 44.9L/ngày, độ ồn chỉ 38dB vận hành êm ái, lọc HEPA loại bỏ bụi mịn, khử mùi bằng than hoạt tính, bình chứa 4L, có chế độ sấy quần áo, bánh xe di chuyển linh hoạt, bảo hành 24 tháng, Giá 8.790.000</code> | <code>Máy hút ẩm Sharp DW-J27FV-S, công suất hút 45.6L/ngày, độ ồn chỉ 38dB vận hành êm ái, lọc HEPA loại bỏ bụi mịn, khử mùi bằng than hoạt tính, bình chứa 4L, có chế độ sấy quần áo, bánh xe di chuyển linh hoạt, bảo hành 24 tháng, Giá 8.790.000</code> | <code>Máy hút ẩm Sharp DW-J27FV-S, công suất hút 43.7L/ngày, độ ồn chỉ 38dB vận hành êm ái, lọc HEPA loại bỏ bụi mịn, khử mùi bằng than hoạt tính, bình chứa 4L, có chế độ sấy quần áo, bánh xe di chuyển linh hoạt, bảo hành 24 tháng, Giá 8.790.000</code> | <code>Máy hút ẩm Sharp DW-J27FV-S, công suất hút 37.2L/ngày, độ ồn chỉ 38dB vận hành êm ái, lọc HEPA loại bỏ bụi mịn, khử mùi bằng than hoạt tính, bình chứa 4L, có chế độ sấy quần áo, bánh xe di chuyển linh hoạt, bảo hành 24 tháng, Giá 8.790.000</code> | <code>Máy hút ẩm Sharp DW-J27FV-S, công suất hút 41.5L/ngày, độ ồn chỉ 38dB vận hành êm ái, lọc HEPA loại bỏ bụi mịn, khử mùi bằng than hoạt tính, bình chứa 4L, có chế độ sấy quần áo, bánh xe di chuyển linh hoạt, bảo hành 24 tháng, Giá 8.790.000</code> | <code>Máy hút ẩm Sharp DW-J27FV-S, công suất hút 42.6L/ngày, độ ồn chỉ 38dB vận hành êm ái, lọc HEPA loại bỏ bụi mịn, khử mùi bằng than hoạt tính, bình chứa 4L, có chế độ sấy quần áo, bánh xe di chuyển linh hoạt, bảo hành 24 tháng, Giá 8.790.000</code> | <code>Máy hút ẩm Sharp DW-J27FV-S, công suất hút 43.8L/ngày, độ ồn chỉ 38dB vận hành êm ái, lọc HEPA loại bỏ bụi mịn, khử mùi bằng than hoạt tính, bình chứa 4L, có chế độ sấy quần áo, bánh xe di chuyển linh hoạt, bảo hành 24 tháng, Giá 8.790.000</code> | <code>Máy hút ẩm Sharp DW-J27FV-S, công suất hút 39.7L/ngày, độ ồn chỉ 38dB vận hành êm ái, lọc HEPA loại bỏ bụi mịn, khử mùi bằng than hoạt tính, bình chứa 4L, có chế độ sấy quần áo, bánh xe di chuyển linh hoạt, bảo hành 24 tháng, Giá 8.790.000</code> | <code>Máy giặt LG Inverter 9kg, Lồng ngang, Giặt hơi nước, Giá: 9.500.000</code> | <code>Máy lọc nước Sunhouse SHA8810, công suất lọc 10L/h, 8 lõi lọc, Giá: 6.500.000</code> | <code>Ghế văn phòng Noble WB-205, xoay 360°, tựa lưng cao, chịu lực 160kg, Giá: 2.450.000</code> | <code>Ghế gấp Inox bọc nệm, Tải trọng 100kg, Giá: 650.000</code> | <code>Quạt điều hòa Sunhouse SHD7725, Lưu lượng 5.500 m³/h, Bình nước 50L, Giá: 4.900.000</code> |
| <code>Cho mình bàn ủi khô công suất lớn hơn 750W được không ạ?</code> | <code>Bàn ủi khô Tefal FV1020, công suất 1000W, Giá: 380.000</code> | <code>Bàn ủi khô Tefal FV1020, công suất 690W, Giá: 380.000</code> | <code>Bàn ủi khô Tefal FV1020, công suất 580W, Giá: 380.000</code> | <code>Bàn ủi khô Tefal FV1020, công suất 460W, Giá: 380.000</code> | <code>Bàn ủi khô Tefal FV1020, công suất 660W, Giá: 380.000</code> | <code>Bàn ủi khô Tefal FV1020, công suất 610W, Giá: 380.000</code> | <code>Bàn ủi khô Tefal FV1020, công suất 440W, Giá: 380.000</code> | <code>Bàn ủi khô Tefal FV1020, công suất 620W, Giá: 380.000</code> | <code>Bàn ủi khô Tefal FV1020, công suất 510W, Giá: 380.000</code> | <code>Bàn ủi khô Tefal FV1020, công suất 540W, Giá: 380.000</code> | <code>Bàn ủi khô Tefal FV1020, công suất 430W, Giá: 380.000</code> | <code>Máy lọc Kangaroo KG111, công suất 25L/h, 9 lõi lọc, vòi nhựa ABS, Giá 7.550.000</code> | <code>Màn hình Samsung Odyssey Neo G9, 49 inch, 240Hz, 2K, Giá: 42.000.000</code> | <code>Máy sấy tóc Philips HP8230, Công suất 2100W, 2 chế độ gió, Giá: 620.000</code> | <code>Robot hút bụi Ecovacs Deebot T10, Pin 5200mAh, Hút 5000Pa, Giá: 12.900.000</code> | <code>Loa kéo Sansui SA2-12, Công suất 600W, Kèm micro, Giá: 4.900.000</code> |
| <code>mình cần tủ rượu dung tích lớn hơn 56 chai</code> | <code>Tủ ướp rượu Kadeka KS140TL, Sức chứa 140 chai, Kính chống UV, Inverter, Giá: 28.500.000</code> | <code>Tủ ướp rượu Kadeka KS140TL, Sức chứa 39.1 chai, Kính chống UV, Inverter, Giá: 28.500.000</code> | <code>Tủ ướp rượu Kadeka KS140TL, Sức chứa 51.7 chai, Kính chống UV, Inverter, Giá: 28.500.000</code> | <code>Tủ ướp rượu Kadeka KS140TL, Sức chứa 29.9 chai, Kính chống UV, Inverter, Giá: 28.500.000</code> | <code>Tủ ướp rượu Kadeka KS140TL, Sức chứa 49.4 chai, Kính chống UV, Inverter, Giá: 28.500.000</code> | <code>Tủ ướp rượu Kadeka KS140TL, Sức chứa 46.6 chai, Kính chống UV, Inverter, Giá: 28.500.000</code> | <code>Tủ ướp rượu Kadeka KS140TL, Sức chứa 37.7 chai, Kính chống UV, Inverter, Giá: 28.500.000</code> | <code>Tủ ướp rượu Kadeka KS140TL, Sức chứa 50.2 chai, Kính chống UV, Inverter, Giá: 28.500.000</code> | <code>Tủ ướp rượu Kadeka KS140TL, Sức chứa 51.5 chai, Kính chống UV, Inverter, Giá: 28.500.000</code> | <code>Tủ ướp rượu Kadeka KS140TL, Sức chứa 34.2 chai, Kính chống UV, Inverter, Giá: 28.500.000</code> | <code>Tủ ướp rượu Kadeka KS140TL, Sức chứa 42.1 chai, Kính chống UV, Inverter, Giá: 28.500.000</code> | <code>Nồi áp suất điện Philips HD2137, dung tích 6L, công suất 1000W, Giá: 1.755.000</code> | <code>Máy sấy tóc Panasonic EH-ND11, công suất 1600W, 2 tốc độ, Giá: 450.000</code> | <code>Tủ quần áo gỗ MDF 3 cánh, Cao 2m, Rộng 1m6, Giá: 6.500.000</code> | <code>Máy sấy Electrolux EDH802, công suất 2000W, sức chứa 8kg, Giá: 12.500.000</code> | <code>Bàn học gỗ MDF, Rộng 1m2, Ngăn kéo bên, Giá: 2.400.000</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Evaluation Dataset
#### Unnamed Dataset
* Size: 1,166 evaluation samples
* Columns: <code>query</code>, <code>positive</code>, <code>negative_1</code>, <code>negative_2</code>, <code>negative_3</code>, <code>negative_4</code>, <code>negative_5</code>, <code>negative_6</code>, <code>negative_7</code>, <code>negative_8</code>, <code>negative_9</code>, <code>negative_10</code>, <code>negative_11</code>, <code>negative_12</code>, <code>negative_13</code>, <code>negative_14</code>, and <code>negative_15</code>
* Approximate statistics based on the first 1000 samples:
| | query | positive | negative_1 | negative_2 | negative_3 | negative_4 | negative_5 | negative_6 | negative_7 | negative_8 | negative_9 | negative_10 | negative_11 | negative_12 | negative_13 | negative_14 | negative_15 |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string | string | string | string | string | string | string | string | string | string | string | string | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 14.35 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 27.84 tokens</li><li>max: 96 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 27.97 tokens</li><li>max: 96 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 27.92 tokens</li><li>max: 96 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 27.96 tokens</li><li>max: 96 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 27.89 tokens</li><li>max: 96 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 27.86 tokens</li><li>max: 96 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 27.85 tokens</li><li>max: 96 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 28.07 tokens</li><li>max: 96 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 27.99 tokens</li><li>max: 91 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 27.64 tokens</li><li>max: 110 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 27.43 tokens</li><li>max: 91 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 27.43 tokens</li><li>max: 110 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 27.15 tokens</li><li>max: 73 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 27.6 tokens</li><li>max: 96 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 27.39 tokens</li><li>max: 110 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 27.65 tokens</li><li>max: 110 tokens</li></ul> |
* Samples:
| query | positive | negative_1 | negative_2 | negative_3 | negative_4 | negative_5 | negative_6 | negative_7 | negative_8 | negative_9 | negative_10 | negative_11 | negative_12 | negative_13 | negative_14 | negative_15 |
|:------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------|
| <code>tôi muốn xe đạp thể thao khung carbon dưới 14kg</code> | <code>Xe đạp Giant TCR Advanced 2, Khung carbon, Trọng lượng: 8.5kg, Groupset Shimano 105, Giá: 42,900,000</code> | <code>Xe đạp Giant TCR Advanced 2, Khung carbon, Trọng lượng: 19.2kg, Groupset Shimano 105, Giá: 42,900,000</code> | <code>Xe đạp Giant TCR Advanced 2, Khung carbon, Trọng lượng: 19.7kg, Groupset Shimano 105, Giá: 42,900,000</code> | <code>Xe đạp Giant TCR Advanced 2, Khung carbon, Trọng lượng: 16.1kg, Groupset Shimano 105, Giá: 42,900,000</code> | <code>Xe đạp Giant TCR Advanced 2, Khung carbon, Trọng lượng: 19.8kg, Groupset Shimano 105, Giá: 42,900,000</code> | <code>Xe đạp Giant TCR Advanced 2, Khung carbon, Trọng lượng: 16.6kg, Groupset Shimano 105, Giá: 42,900,000</code> | <code>Xe đạp Giant TCR Advanced 2, Khung carbon, Trọng lượng: 19.4kg, Groupset Shimano 105, Giá: 42,900,000</code> | <code>Xe đạp Giant TCR Advanced 2, Khung carbon, Trọng lượng: 15.7kg, Groupset Shimano 105, Giá: 42,900,000</code> | <code>Xe đạp Giant TCR Advanced 2, Khung carbon, Trọng lượng: 15.3kg, Groupset Shimano 105, Giá: 42,900,000</code> | <code>Xe đạp Giant TCR Advanced 2, Khung carbon, Trọng lượng: 17.9kg, Groupset Shimano 105, Giá: 42,900,000</code> | <code>Xe đạp Giant TCR Advanced 2, Khung carbon, Trọng lượng: 16.4kg, Groupset Shimano 105, Giá: 42,900,000</code> | <code>Bếp từ Sunhouse SHD6017, Công suất: 1400W, Mặt kính chịu lực, 8 chế độ nấu, Giá: 899,000</code> | <code>Quạt bàn Midea FZ40-15DB, công suất 60W, 3 tốc độ, Giá: 350.000</code> | <code>Máy lọc không khí Coway AP-1009CH, Diện tích phòng 33m2, Lọc HEPA, Giá: 4.200.000</code> | <code>Máy chiếu ViewSonic PA503X, Tuổi thọ 15.000h, Độ sáng 3.800 Lumens, Giá: 9.900.000</code> | <code>Bếp từ đôi Kangaroo KG499N, Công suất 2200W, Mặt kính Schott, Giá: 4.200.000</code> |
| <code>mình cần máy in tốc độ trên 25 trang/phút và giá nhỏ hơn 9 triệu</code> | <code>Máy in HP LaserJet Pro M404dn, Tốc độ 30 trang/phút, Kết nối LAN, Giá: 6.800.000</code> | <code>Máy in HP LaserJet Pro M404dn, Tốc độ 30 trang/phút, Kết nối LAN, Giá: 11.000.000</code> | <code>Máy in HP LaserJet Pro M404dn, Tốc độ 30 trang/phút, Kết nối LAN, Giá: 13.000.000</code> | <code>Máy in HP LaserJet Pro M404dn, Tốc độ 30 trang/phút, Kết nối LAN, Giá: 12.000.000</code> | <code>Máy in HP LaserJet Pro M404dn, Tốc độ 30 trang/phút, Kết nối LAN, Giá: 9.968.000</code> | <code>Máy in HP LaserJet Pro M404dn, Tốc độ 30 trang/phút, Kết nối LAN, Giá: 9.979.000</code> | <code>Máy in HP LaserJet Pro M404dn, Tốc độ 30 trang/phút, Kết nối LAN, Giá: 10.000.000</code> | <code>Máy in HP LaserJet Pro M404dn, Tốc độ 30 trang/phút, Kết nối LAN, Giá: 9.467.000</code> | <code>Balo laptop Xiaomi 15.6 inch, chống nước, nhiều ngăn, Giá: 980.000</code> | <code>Máy lọc nước Karofi KAQ-U95, Công suất 780W, 10 lõi lọc, Giá: 6.500.000</code> | <code>Máy sấy tóc Panasonic EH-ND64, Công suất 1600W, 3 mức gió, Giá: 750.000</code> | <code>Tủ lạnh Samsung Inverter RT35K5982SL, Dung tích 360L, Công nghệ Twin Cooling Plus, Giá: 10.900.000</code> | <code>Máy ảnh Canon EOS M50 Mark II, Trọng lượng: 387g, Cảm biến APS-C 24.1MP, Quay 4K, Màn hình xoay lật, Giá: 15,900,000</code> | <code>Đèn năng lượng mặt trời Sunhouse SH-SOL28, công suất 28W, pin lithium 12V, Giá 650.000</code> | <code>Điện thoại Samsung Galaxy M14, pin 6000mAh, màn hình 6.6 inch, RAM 6GB, ROM 128GB, Giá 4.290.000</code> | <code>Giày Converse Chuck Taylor 1970s High, Canvas, Giá: 1.850.000</code> |
| <code>Máy lọc nước công suất nhỏ hơn 1445W</code> | <code>Máy lọc nước Karofi KSI90, công suất 850W, 9 lõi lọc, tiết kiệm điện, Giá 3.790.000</code> | <code>Máy lọc nước Karofi KSI90, công suất 1900W, 9 lõi lọc, tiết kiệm điện, Giá 3.790.000</code> | <code>Máy lọc nước Karofi KSI90, công suất 2000W, 9 lõi lọc, tiết kiệm điện, Giá 3.790.000</code> | <code>Máy lọc nước Karofi KSI90, công suất 1600W, 9 lõi lọc, tiết kiệm điện, Giá 3.790.000</code> | <code>Máy lọc nước Karofi KSI90, công suất 1500W, 9 lõi lọc, tiết kiệm điện, Giá 3.790.000</code> | <code>Máy lọc nước Karofi KSI90, công suất 1700W, 9 lõi lọc, tiết kiệm điện, Giá 3.790.000</code> | <code>Máy lọc nước Karofi KSI90, công suất 2100W, 9 lõi lọc, tiết kiệm điện, Giá 3.790.000</code> | <code>Máy xay Philips HR2056, Cối 0.7L, 4 cánh thép, Giá: 650.000</code> | <code>Loa JBL PartyBox 110, Công suất 160W, Chống nước IPX4, Giá: 8.500.000</code> | <code>Lò vi sóng Electrolux 25L, công suất 900W, chức năng rã đông, Giá: 3.500.000</code> | <code>Đèn năng lượng mặt trời Sunhouse SH-SOL30, công suất 25W, pin lithium 12V, Giá: 650.000</code> | <code>Loa Sony SRS-XB23, công suất 25W, pin 12h, chống nước IP67, Giá: 3.200.000</code> | <code>Máy sấy tóc Panasonic EH-ND11, 1900W, chế độ ion âm, Giá: 785.000</code> | <code>Sofa nỉ nhập khẩu, Chiều dài 2m2, 3 chỗ ngồi, Giá: 11.500.000</code> | <code>Tủ đông Alaska BCD-3.568, Dung tích 360L, 2 ngăn đông-mát, Giá: 9.500.000</code> | <code>Gạo ST25 túi 5kg, Hạt dài, dẻo, thơm tự nhiên, Xuất xứ: Sóc Trăng, Chứng nhận OCOP, Giá: 240.000</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
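Both splits use `MultipleNegativesRankingLoss` with `scale=20.0` and cosine similarity. A minimal training sketch (with a toy two-column dataset standing in for the real 17-column one) could look like this:

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

# Toy (query, positive) pairs; the real dataset also carries 15 negative columns.
train_ds = Dataset.from_dict({
    "query": ["mình cần bàn ủi khô công suất lớn hơn 750W"],
    "positive": ["Bàn ủi khô Tefal FV1020, công suất 1000W, Giá: 380.000"],
})
model = SentenceTransformer("vinai/phobert-base-v2")
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)  # cos_sim is the default

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_ds, loss=loss)
trainer.train()
```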
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `learning_rate`: 2e-05
- `num_train_epochs`: 10
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | Validation Loss | cosine_ndcg@10 |
|:------:|:-----:|:-------------:|:---------------:|:--------------:|
| -1 | -1 | - | - | 0.1885 |
| 0.0762 | 100 | 3.4564 | - | - |
| 0.1524 | 200 | 2.4965 | - | - |
| 0.2287 | 300 | 2.4147 | - | - |
| 0.3049 | 400 | 2.2799 | - | - |
| 0.3811 | 500 | 1.9016 | - | - |
| 0.4573 | 600 | 1.6222 | - | - |
| 0.5335 | 700 | 1.5005 | - | - |
| 0.6098 | 800 | 1.2556 | - | - |
| 0.6860 | 900 | 1.2152 | - | - |
| 0.7622 | 1000 | 1.0514 | 0.7850 | 0.5380 |
| 0.8384 | 1100 | 0.9779 | - | - |
| 0.9146 | 1200 | 0.7901 | - | - |
| 0.9909 | 1300 | 0.8193 | - | - |
| 1.0671 | 1400 | 0.6562 | - | - |
| 1.1433 | 1500 | 0.5943 | - | - |
| 1.2195 | 1600 | 0.6152 | - | - |
| 1.2957 | 1700 | 0.497 | - | - |
| 1.3720 | 1800 | 0.5094 | - | - |
| 1.4482 | 1900 | 0.3873 | - | - |
| 1.5244 | 2000 | 0.3555 | 0.3186 | 0.6475 |
| 1.6006 | 2100 | 0.3265 | - | - |
| 1.6768 | 2200 | 0.3788 | - | - |
| 1.7530 | 2300 | 0.3472 | - | - |
| 1.8293 | 2400 | 0.341 | - | - |
| 1.9055 | 2500 | 0.3419 | - | - |
| 1.9817 | 2600 | 0.3228 | - | - |
| 2.0579 | 2700 | 0.3064 | - | - |
| 2.1341 | 2800 | 0.2423 | - | - |
| 2.2104 | 2900 | 0.2225 | - | - |
| 2.2866 | 3000 | 0.2204 | 0.2283 | 0.6909 |
| 2.3628 | 3100 | 0.2097 | - | - |
| 2.4390 | 3200 | 0.2072 | - | - |
| 2.5152 | 3300 | 0.2057 | - | - |
| 2.5915 | 3400 | 0.2142 | - | - |
| 2.6677 | 3500 | 0.1964 | - | - |
| 2.7439 | 3600 | 0.1807 | - | - |
| 2.8201 | 3700 | 0.1822 | - | - |
| 2.8963 | 3800 | 0.1836 | - | - |
| 2.9726 | 3900 | 0.1889 | - | - |
| 3.0488 | 4000 | 0.1487 | 0.1836 | 0.7172 |
| 3.125 | 4100 | 0.1519 | - | - |
| 3.2012 | 4200 | 0.1684 | - | - |
| 3.2774 | 4300 | 0.1211 | - | - |
| 3.3537 | 4400 | 0.1554 | - | - |
| 3.4299 | 4500 | 0.1421 | - | - |
| 3.5061 | 4600 | 0.1339 | - | - |
| 3.5823 | 4700 | 0.1324 | - | - |
| 3.6585 | 4800 | 0.138 | - | - |
| 3.7348 | 4900 | 0.1181 | - | - |
| 3.8110 | 5000 | 0.139 | 0.1709 | 0.7259 |
| 3.8872 | 5100 | 0.1284 | - | - |
| 3.9634 | 5200 | 0.1172 | - | - |
| 4.0396 | 5300 | 0.0993 | - | - |
| 4.1159 | 5400 | 0.1183 | - | - |
| 4.1921 | 5500 | 0.0865 | - | - |
| 4.2683 | 5600 | 0.0926 | - | - |
| 4.3445 | 5700 | 0.0973 | - | - |
| 4.4207 | 5800 | 0.1106 | - | - |
| 4.4970 | 5900 | 0.1024 | - | - |
| 4.5732 | 6000 | 0.0842 | 0.1439 | 0.7421 |
| 4.6494 | 6100 | 0.0927 | - | - |
| 4.7256 | 6200 | 0.0867 | - | - |
| 4.8018 | 6300 | 0.1022 | - | - |
| 4.8780 | 6400 | 0.1124 | - | - |
| 4.9543 | 6500 | 0.0839 | - | - |
| 5.0305 | 6600 | 0.1063 | - | - |
| 5.1067 | 6700 | 0.0764 | - | - |
| 5.1829 | 6800 | 0.0687 | - | - |
| 5.2591 | 6900 | 0.0863 | - | - |
| 5.3354 | 7000 | 0.0804 | 0.1185 | 0.7501 |
| 5.4116 | 7100 | 0.0726 | - | - |
| 5.4878 | 7200 | 0.0627 | - | - |
| 5.5640 | 7300 | 0.0644 | - | - |
| 5.6402 | 7400 | 0.0682 | - | - |
| 5.7165 | 7500 | 0.0713 | - | - |
| 5.7927 | 7600 | 0.0753 | - | - |
| 5.8689 | 7700 | 0.0815 | - | - |
| 5.9451 | 7800 | 0.0663 | - | - |
| 6.0213 | 7900 | 0.083 | - | - |
| 6.0976 | 8000 | 0.0682 | 0.1262 | 0.7626 |
| 6.1738 | 8100 | 0.0711 | - | - |
| 6.25 | 8200 | 0.0581 | - | - |
| 6.3262 | 8300 | 0.0655 | - | - |
| 6.4024 | 8400 | 0.0612 | - | - |
| 6.4787 | 8500 | 0.0714 | - | - |
| 6.5549 | 8600 | 0.0509 | - | - |
| 6.6311 | 8700 | 0.0486 | - | - |
| 6.7073 | 8800 | 0.0456 | - | - |
| 6.7835 | 8900 | 0.0724 | - | - |
| 6.8598 | 9000 | 0.0596 | 0.1049 | 0.7675 |
| 6.9360 | 9100 | 0.0424 | - | - |
| 7.0122 | 9200 | 0.0617 | - | - |
| 7.0884 | 9300 | 0.0484 | - | - |
| 7.1646 | 9400 | 0.0506 | - | - |
| 7.2409 | 9500 | 0.0561 | - | - |
| 7.3171 | 9600 | 0.0502 | - | - |
| 7.3933 | 9700 | 0.0357 | - | - |
| 7.4695 | 9800 | 0.0476 | - | - |
| 7.5457 | 9900 | 0.0379 | - | - |
| 7.6220 | 10000 | 0.0455 | 0.1063 | 0.7706 |
| 7.6982 | 10100 | 0.0479 | - | - |
| 7.7744 | 10200 | 0.0605 | - | - |
| 7.8506 | 10300 | 0.0486 | - | - |
| 7.9268 | 10400 | 0.0591 | - | - |
| 8.0030 | 10500 | 0.0445 | - | - |
| 8.0793 | 10600 | 0.0369 | - | - |
| 8.1555 | 10700 | 0.041 | - | - |
| 8.2317 | 10800 | 0.0449 | - | - |
| 8.3079 | 10900 | 0.0387 | - | - |
| 8.3841 | 11000 | 0.0263 | 0.1080 | 0.7706 |
| 8.4604 | 11100 | 0.0248 | - | - |
| 8.5366 | 11200 | 0.0475 | - | - |
| 8.6128 | 11300 | 0.056 | - | - |
| 8.6890 | 11400 | 0.0401 | - | - |
| 8.7652 | 11500 | 0.043 | - | - |
| 8.8415 | 11600 | 0.0447 | - | - |
| 8.9177 | 11700 | 0.0461 | - | - |
| 8.9939 | 11800 | 0.0436 | - | - |
| 9.0701 | 11900 | 0.0373 | - | - |
| 9.1463 | 12000 | 0.0498 | 0.1015 | 0.7761 |
| 9.2226 | 12100 | 0.0284 | - | - |
| 9.2988 | 12200 | 0.0421 | - | - |
| 9.375 | 12300 | 0.0327 | - | - |
| 9.4512 | 12400 | 0.0476 | - | - |
| 9.5274 | 12500 | 0.0381 | - | - |
| 9.6037 | 12600 | 0.0377 | - | - |
| 9.6799 | 12700 | 0.0449 | - | - |
| 9.7561 | 12800 | 0.0344 | - | - |
| 9.8323 | 12900 | 0.0249 | - | - |
| 9.9085 | 13000 | 0.0411 | 0.0993 | 0.7760 |
| 9.9848 | 13100 | 0.0466 | - | - |
</details>
### Framework Versions
- Python: 3.11.13
- Sentence Transformers: 4.1.0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Accelerate: 1.8.1
- Datasets: 3.6.0
- Tokenizers: 0.21.2
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
Cike-Official/gemma-3-270m-it-cyberbullying
|
Cike-Official
| 2025-08-19T04:29:06Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T03:58:59Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: gemma-3-270m-it-cyberbullying
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for gemma-3-270m-it-cyberbullying
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Cike-Official/gemma-3-270m-it-cyberbullying", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
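For illustration, a minimal TRL SFT run looks roughly like the sketch below; the dataset, settings, and output directory are placeholders, not the actual training configuration:
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical SFT sketch; the dataset and hyperparameters are illustrative only.
dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset

trainer = SFTTrainer(
    model="google/gemma-3-270m-it",
    train_dataset=dataset,
    args=SFTConfig(output_dir="gemma-3-270m-it-cyberbullying"),
)
trainer.train()
```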
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
dannyLeo16/ner_model_bert_base
|
dannyLeo16
| 2025-08-19T04:28:38Z | 1 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"ner",
"spanish",
"emergencies",
"ecu-911",
"es",
"dataset:custom-ecu911",
"base_model:dccuchile/bert-base-spanish-wwm-cased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-06-16T00:59:32Z |
---
language:
- es
license: apache-2.0
library_name: transformers
base_model: dccuchile/bert-base-spanish-wwm-cased
pipeline_tag: token-classification
tags:
- ner
- token-classification
- spanish
- bert
- emergencies
- ecu-911
datasets:
- custom-ecu911
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: ner_model_bert_base
results:
- task:
type: token-classification
name: Named Entity Recognition
dataset:
name: custom-ecu911
type: custom
split: test
metrics:
- type: accuracy
value: 0.9739766081871345
- type: f1
name: Macro F1
value: 0.8898766824816503
- type: precision
name: Macro Precision
value: 0.8801934151701145
- type: recall
name: Macro Recall
value: 0.9001920589792443
---
# NER for Spanish Emergency Reports (ECU-911)
**Author/Maintainer:** Danny Paltin ([@dannyLeo16](https://huggingface.co/dannyLeo16))
**Task:** Token Classification (NER)
**Language:** Spanish (es)
**Finetuned from:** `dccuchile/bert-base-spanish-wwm-cased`
**Entities (BIO):** `PER` and `LOC` → `["O","B-PER","I-PER","B-LOC","I-LOC"]`
This model is a Spanish BERT fine-tuned to identify **persons** and **locations** in short emergency incident descriptions (ECU-911-style). It was developed for the research project:
> **“Representación del conocimiento para emergencias del ECU-911 mediante PLN, ontologías OWL y reglas SWRL.”**
---
## Model Details
- **Architecture:** BERT (Whole Word Masking, cased)
- **Tokenizer:** `dccuchile/bert-base-spanish-wwm-cased`
- **Max length:** uses base tokenizer `model_max_length` (padding to max length)
- **Libraries:** 🤗 Transformers, 🤗 Datasets, PyTorch
- **Labels:** `O, B-PER, I-PER, B-LOC, I-LOC`
---
## Training Data
- **Source:** Custom Spanish emergency reports (Ecuador, ECU-911-style) with token-level BIO annotations.
- **Size:** **510** texts; **34,232** tokens (avg **67.12** tokens/text).
- **Entity counts (BIO spans):** **PER = 421**, **LOC = 1,643**.
- **Token-level label distribution:** `O=30,132`, `B-LOC=1,643`, `I-LOC=1,617`, `B-PER=421`, `I-PER=419`.
- **Splits:** 80% train / 10% validation / 10% test (random split performed during training).
> **Privacy/Ethics.** Data should be anonymized and free of PII. Do not deploy on personal/live data without consent and compliance with local regulation.
---
## Training Procedure
- **Objective:** Token classification (cross-entropy); continuation subwords are ignored with `-100` (see the alignment sketch after this list).
- **Hyperparameters:**
- `learning_rate = 2e-5`
- `num_train_epochs = 3`
- `per_device_train_batch_size = 8`
- `per_device_eval_batch_size = 8`
- `weight_decay = 0.01`
- `evaluation_strategy = "epoch"`, `save_strategy = "epoch"`
- `load_best_model_at_end = true` *(selected by `eval_loss`)*
- **Data collator:** `DataCollatorForTokenClassification` (padding a `max_length`)
- **Hardware:** <specify here, e.g. "Google Colab – NVIDIA T4 GPU">
- **Seed:** <if a seed was fixed, state it>
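A minimal sketch of the `-100` alignment referenced in the objective above; the sentence and labels are made-up examples, not training data:
```python
from transformers import AutoTokenizer

# Sketch of BIO-label alignment: only the first subword of each word keeps its
# label; special tokens and continuation subwords are masked with -100.
tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-cased")

words = ["Accidente", "en", "la", "Av.", "Amazonas"]   # made-up example
word_labels = [0, 0, 0, 3, 4]                          # O O O B-LOC I-LOC

encoding = tokenizer(words, is_split_into_words=True, truncation=True)
aligned_labels = []
previous_word_id = None
for word_id in encoding.word_ids():
    if word_id is None:                    # [CLS] / [SEP]
        aligned_labels.append(-100)
    elif word_id != previous_word_id:      # first subword of a word
        aligned_labels.append(word_labels[word_id])
    else:                                  # continuation subword
        aligned_labels.append(-100)
    previous_word_id = word_id
```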
---
## Evaluation
**Validation (epoch 3):**
- Accuracy: **0.9480**
- Macro F1: **0.7998**
- Macro Precision: **0.7914**
- Macro Recall: **0.8118**
- Eval loss: **0.1458**
**Test:**
- Accuracy: **0.9740**
- Macro F1: **0.8899**
- Macro Precision: **0.8802**
- Macro Recall: **0.9002**
- Eval loss: **0.0834**
*(Computed with `sklearn.metrics`, excluding `-100` positions.)*
---
## Intended Use
- NER over Spanish emergency/incident text (ECU-911-like).
- Downstream knowledge representation (OWL/SWRL).
- Academic research and prototyping.
### Limitations
- Domain-specific; performance may drop on other domains.
- Only `PER` and `LOC` entities.
- May struggle with colloquialisms, misspellings, or code-switching.
---
## How to use
```python
from transformers import pipeline
ner = pipeline(
"token-classification",
model="dannyLeo16/ner_model_bert_base",
tokenizer="dannyLeo16/ner_model_bert_base",
aggregation_strategy="simple"
)
text = "Se reporta accidente en la Av. de las Américas con dos personas heridas."
ner(text)
```
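With `aggregation_strategy="simple"`, the pipeline groups subwords into spans and returns one dict per detected entity; the values below are illustrative, not real model output:
```python
# Illustrative output shape of the aggregated token-classification pipeline
# (score and character offsets are made up):
# [{'entity_group': 'LOC', 'score': 0.99,
#   'word': 'Av. de las Américas', 'start': 27, 'end': 46}]
```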
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755576065
|
kojeklollipop
| 2025-08-19T04:27:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T04:27:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
taochengfei/llama-3.2-3b-it-beta_assistant_v0.2
|
taochengfei
| 2025-08-19T04:26:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T04:26:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/qwen2.5-coder-32b-single-verilog-GGUF
|
mradermacher
| 2025-08-19T04:25:30Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:hdl2v/qwen2.5-coder-32b-single-verilog",
"base_model:quantized:hdl2v/qwen2.5-coder-32b-single-verilog",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-18T09:26:18Z |
---
base_model: hdl2v/qwen2.5-coder-32b-single-verilog
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/hdl2v/qwen2.5-coder-32b-single-verilog
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#qwen2.5-coder-32b-single-verilog-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/qwen2.5-coder-32b-single-verilog-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
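For reference, multi-part GGUF files are usually joined with `cat` before loading (the quants listed below are single files, and the filenames here are only illustrative):
```shell
# Hypothetical example: concatenate split GGUF parts into a single file.
cat model.Q8_0.gguf.part1of2 model.Q8_0.gguf.part2of2 > model.Q8_0.gguf
```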
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-coder-32b-single-verilog-GGUF/resolve/main/qwen2.5-coder-32b-single-verilog.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-coder-32b-single-verilog-GGUF/resolve/main/qwen2.5-coder-32b-single-verilog.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-coder-32b-single-verilog-GGUF/resolve/main/qwen2.5-coder-32b-single-verilog.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-coder-32b-single-verilog-GGUF/resolve/main/qwen2.5-coder-32b-single-verilog.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-coder-32b-single-verilog-GGUF/resolve/main/qwen2.5-coder-32b-single-verilog.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-coder-32b-single-verilog-GGUF/resolve/main/qwen2.5-coder-32b-single-verilog.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-coder-32b-single-verilog-GGUF/resolve/main/qwen2.5-coder-32b-single-verilog.Q4_K_M.gguf) | Q4_K_M | 19.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-coder-32b-single-verilog-GGUF/resolve/main/qwen2.5-coder-32b-single-verilog.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-coder-32b-single-verilog-GGUF/resolve/main/qwen2.5-coder-32b-single-verilog.Q5_K_M.gguf) | Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-coder-32b-single-verilog-GGUF/resolve/main/qwen2.5-coder-32b-single-verilog.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-coder-32b-single-verilog-GGUF/resolve/main/qwen2.5-coder-32b-single-verilog.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
WariHima/ruri-v3-30m-onnx
|
WariHima
| 2025-08-19T04:25:20Z | 0 | 0 | null |
[
"onnx",
"ja",
"base_model:cl-nagoya/ruri-v3-30m",
"base_model:quantized:cl-nagoya/ruri-v3-30m",
"license:apache-2.0",
"region:us"
] | null | 2025-06-22T02:58:40Z |
---
license: apache-2.0
language:
- ja
base_model:
- cl-nagoya/ruri-v3-30m
---
|
donoway/BoolQ_Llama-3.2-1B-8f4o6kcm
|
donoway
| 2025-08-19T04:25:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T03:12:29Z |
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: BoolQ_Llama-3.2-1B-8f4o6kcm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BoolQ_Llama-3.2-1B-8f4o6kcm
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5793
- Model Preparation Time: 0.0057
- Mdl: 7450.5715 (see the note after this list)
- Accumulated Loss: 5164.3426
- Correct Preds: 2634.0
- Total Preds: 3270.0
- Accuracy: 0.8055
- Correct Gen Preds: 2638.0
- Gen Accuracy: 0.8067
- Correct Gen Preds 9642: 1699.0
- Correct Preds 9642: 1701.0
- Total Labels 9642: 2026.0
- Accuracy 9642: 0.8396
- Gen Accuracy 9642: 0.8386
- Correct Gen Preds 2822: 930.0
- Correct Preds 2822: 933.0
- Total Labels 2822: 1231.0
- Accuracy 2822: 0.7579
- Gen Accuracy 2822: 0.7555
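A note on the two less common metrics above: the reported values are consistent with `Accumulated Loss` being the evaluation loss summed over all predictions in nats (1.5793 × 3270 ≈ 5164.3) and `Mdl` being that same sum converted to bits (5164.3426 / ln 2 ≈ 7450.57), i.e. a minimum-description-length reading of the loss. This is inferred from the numbers themselves, not documented by the training code.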
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 120
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Mdl | Accumulated Loss | Correct Preds | Total Preds | Accuracy | Correct Gen Preds | Gen Accuracy | Correct Gen Preds 9642 | Correct Preds 9642 | Total Labels 9642 | Accuracy 9642 | Gen Accuracy 9642 | Correct Gen Preds 2822 | Correct Preds 2822 | Total Labels 2822 | Accuracy 2822 | Gen Accuracy 2822 |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:---------:|:----------------:|:-------------:|:-----------:|:--------:|:-----------------:|:------------:|:----------------------:|:------------------:|:-----------------:|:-------------:|:-----------------:|:----------------------:|:------------------:|:-----------------:|:-------------:|:-----------------:|
| No log | 0 | 0 | 0.7080 | 0.0057 | 3339.8933 | 2315.0376 | 2032.0 | 3270.0 | 0.6214 | 2040.0 | 0.6239 | 2007.0 | 2008.0 | 2026.0 | 0.9911 | 0.9906 | 24.0 | 24.0 | 1231.0 | 0.0195 | 0.0195 |
| 0.4667 | 1.0 | 69 | 0.6104 | 0.0057 | 2879.4424 | 1995.8774 | 2384.0 | 3270.0 | 0.7291 | 2390.0 | 0.7309 | 1351.0 | 1351.0 | 2026.0 | 0.6668 | 0.6668 | 1031.0 | 1033.0 | 1231.0 | 0.8392 | 0.8375 |
| 0.4652 | 2.0 | 138 | 0.5854 | 0.0057 | 2761.8042 | 1914.3368 | 2618.0 | 3270.0 | 0.8006 | 2476.0 | 0.7572 | 1601.0 | 1693.0 | 2026.0 | 0.8356 | 0.7902 | 869.0 | 925.0 | 1231.0 | 0.7514 | 0.7059 |
| 0.2361 | 3.0 | 207 | 0.9383 | 0.0057 | 4426.4291 | 3068.1669 | 2615.0 | 3270.0 | 0.7997 | 2598.0 | 0.7945 | 1691.0 | 1708.0 | 2026.0 | 0.8430 | 0.8346 | 900.0 | 907.0 | 1231.0 | 0.7368 | 0.7311 |
| 0.0138 | 4.0 | 276 | 1.2278 | 0.0057 | 5792.2561 | 4014.8860 | 2550.0 | 3270.0 | 0.7798 | 2530.0 | 0.7737 | 1541.0 | 1562.0 | 2026.0 | 0.7710 | 0.7606 | 981.0 | 988.0 | 1231.0 | 0.8026 | 0.7969 |
| 0.0 | 5.0 | 345 | 1.6244 | 0.0057 | 7663.1836 | 5311.7141 | 2604.0 | 3270.0 | 0.7963 | 2607.0 | 0.7972 | 1634.0 | 1636.0 | 2026.0 | 0.8075 | 0.8065 | 965.0 | 968.0 | 1231.0 | 0.7864 | 0.7839 |
| 0.0001 | 6.0 | 414 | 1.3741 | 0.0057 | 6482.3648 | 4493.2329 | 2627.0 | 3270.0 | 0.8034 | 2631.0 | 0.8046 | 1700.0 | 1702.0 | 2026.0 | 0.8401 | 0.8391 | 923.0 | 925.0 | 1231.0 | 0.7514 | 0.7498 |
| 0.0 | 7.0 | 483 | 1.4617 | 0.0057 | 6895.6302 | 4779.6866 | 2630.0 | 3270.0 | 0.8043 | 2634.0 | 0.8055 | 1701.0 | 1703.0 | 2026.0 | 0.8406 | 0.8396 | 924.0 | 927.0 | 1231.0 | 0.7530 | 0.7506 |
| 0.0 | 8.0 | 552 | 1.4956 | 0.0057 | 7055.6245 | 4890.5862 | 2625.0 | 3270.0 | 0.8028 | 2629.0 | 0.8040 | 1695.0 | 1697.0 | 2026.0 | 0.8376 | 0.8366 | 925.0 | 928.0 | 1231.0 | 0.7539 | 0.7514 |
| 0.0 | 9.0 | 621 | 1.5171 | 0.0057 | 7157.2276 | 4961.0122 | 2625.0 | 3270.0 | 0.8028 | 2629.0 | 0.8040 | 1698.0 | 1700.0 | 2026.0 | 0.8391 | 0.8381 | 922.0 | 925.0 | 1231.0 | 0.7514 | 0.7490 |
| 0.0001 | 10.0 | 690 | 1.5322 | 0.0057 | 7228.4800 | 5010.4005 | 2628.0 | 3270.0 | 0.8037 | 2632.0 | 0.8049 | 1698.0 | 1700.0 | 2026.0 | 0.8391 | 0.8381 | 925.0 | 928.0 | 1231.0 | 0.7539 | 0.7514 |
| 0.0 | 11.0 | 759 | 1.5461 | 0.0057 | 7293.7816 | 5055.6641 | 2629.0 | 3270.0 | 0.8040 | 2633.0 | 0.8052 | 1696.0 | 1698.0 | 2026.0 | 0.8381 | 0.8371 | 928.0 | 931.0 | 1231.0 | 0.7563 | 0.7539 |
| 0.0 | 12.0 | 828 | 1.5571 | 0.0057 | 7345.8001 | 5091.7207 | 2630.0 | 3270.0 | 0.8043 | 2634.0 | 0.8055 | 1698.0 | 1700.0 | 2026.0 | 0.8391 | 0.8381 | 927.0 | 930.0 | 1231.0 | 0.7555 | 0.7530 |
| 0.0 | 13.0 | 897 | 1.5696 | 0.0057 | 7404.5688 | 5132.4560 | 2628.0 | 3270.0 | 0.8037 | 2633.0 | 0.8052 | 1695.0 | 1697.0 | 2026.0 | 0.8376 | 0.8366 | 929.0 | 931.0 | 1231.0 | 0.7563 | 0.7547 |
| 0.0 | 14.0 | 966 | 1.5735 | 0.0057 | 7423.3806 | 5145.4953 | 2629.0 | 3270.0 | 0.8040 | 2633.0 | 0.8052 | 1692.0 | 1694.0 | 2026.0 | 0.8361 | 0.8351 | 932.0 | 935.0 | 1231.0 | 0.7595 | 0.7571 |
| 0.0 | 15.0 | 1035 | 1.5793 | 0.0057 | 7450.5715 | 5164.3426 | 2634.0 | 3270.0 | 0.8055 | 2638.0 | 0.8067 | 1699.0 | 1701.0 | 2026.0 | 0.8396 | 0.8386 | 930.0 | 933.0 | 1231.0 | 0.7579 | 0.7555 |
| 0.0 | 16.0 | 1104 | 1.5878 | 0.0057 | 7490.8020 | 5192.2283 | 2629.0 | 3270.0 | 0.8040 | 2633.0 | 0.8052 | 1694.0 | 1696.0 | 2026.0 | 0.8371 | 0.8361 | 930.0 | 933.0 | 1231.0 | 0.7579 | 0.7555 |
| 0.0 | 17.0 | 1173 | 1.5863 | 0.0057 | 7483.6533 | 5187.2732 | 2629.0 | 3270.0 | 0.8040 | 2633.0 | 0.8052 | 1696.0 | 1698.0 | 2026.0 | 0.8381 | 0.8371 | 928.0 | 931.0 | 1231.0 | 0.7563 | 0.7539 |
| 0.0 | 18.0 | 1242 | 1.5880 | 0.0057 | 7491.6557 | 5192.8201 | 2629.0 | 3270.0 | 0.8040 | 2634.0 | 0.8055 | 1693.0 | 1695.0 | 2026.0 | 0.8366 | 0.8356 | 932.0 | 934.0 | 1231.0 | 0.7587 | 0.7571 |
| 0.0 | 19.0 | 1311 | 1.5920 | 0.0057 | 7510.3381 | 5205.7697 | 2628.0 | 3270.0 | 0.8037 | 2633.0 | 0.8052 | 1693.0 | 1695.0 | 2026.0 | 0.8366 | 0.8356 | 931.0 | 933.0 | 1231.0 | 0.7579 | 0.7563 |
| 0.0 | 20.0 | 1380 | 1.5951 | 0.0057 | 7524.8328 | 5215.8166 | 2627.0 | 3270.0 | 0.8034 | 2631.0 | 0.8046 | 1691.0 | 1693.0 | 2026.0 | 0.8356 | 0.8346 | 931.0 | 934.0 | 1231.0 | 0.7587 | 0.7563 |
| 0.0 | 21.0 | 1449 | 1.5937 | 0.0057 | 7518.3160 | 5211.2995 | 2623.0 | 3270.0 | 0.8021 | 2628.0 | 0.8037 | 1689.0 | 1691.0 | 2026.0 | 0.8346 | 0.8337 | 930.0 | 932.0 | 1231.0 | 0.7571 | 0.7555 |
| 0.0 | 22.0 | 1518 | 1.5941 | 0.0057 | 7520.5677 | 5212.8603 | 2626.0 | 3270.0 | 0.8031 | 2631.0 | 0.8046 | 1693.0 | 1695.0 | 2026.0 | 0.8366 | 0.8356 | 929.0 | 931.0 | 1231.0 | 0.7563 | 0.7547 |
| 0.4705 | 23.0 | 1587 | 1.5944 | 0.0057 | 7521.8050 | 5213.7179 | 2629.0 | 3270.0 | 0.8040 | 2633.0 | 0.8052 | 1693.0 | 1695.0 | 2026.0 | 0.8366 | 0.8356 | 931.0 | 934.0 | 1231.0 | 0.7587 | 0.7563 |
| 0.0 | 24.0 | 1656 | 1.5942 | 0.0057 | 7520.9457 | 5213.1223 | 2626.0 | 3270.0 | 0.8031 | 2631.0 | 0.8046 | 1691.0 | 1693.0 | 2026.0 | 0.8356 | 0.8346 | 931.0 | 933.0 | 1231.0 | 0.7579 | 0.7563 |
| 0.0 | 25.0 | 1725 | 1.5932 | 0.0057 | 7516.1335 | 5209.7867 | 2628.0 | 3270.0 | 0.8037 | 2633.0 | 0.8052 | 1692.0 | 1694.0 | 2026.0 | 0.8361 | 0.8351 | 932.0 | 934.0 | 1231.0 | 0.7587 | 0.7571 |
| 0.0 | 26.0 | 1794 | 1.5939 | 0.0057 | 7519.3793 | 5212.0365 | 2631.0 | 3270.0 | 0.8046 | 2635.0 | 0.8058 | 1695.0 | 1697.0 | 2026.0 | 0.8376 | 0.8366 | 931.0 | 934.0 | 1231.0 | 0.7587 | 0.7563 |
| 0.0 | 27.0 | 1863 | 1.5943 | 0.0057 | 7521.3797 | 5213.4231 | 2631.0 | 3270.0 | 0.8046 | 2636.0 | 0.8061 | 1694.0 | 1696.0 | 2026.0 | 0.8371 | 0.8361 | 933.0 | 935.0 | 1231.0 | 0.7595 | 0.7579 |
| 0.0 | 28.0 | 1932 | 1.5947 | 0.0057 | 7522.9538 | 5214.5142 | 2631.0 | 3270.0 | 0.8046 | 2636.0 | 0.8061 | 1693.0 | 1695.0 | 2026.0 | 0.8366 | 0.8356 | 934.0 | 936.0 | 1231.0 | 0.7604 | 0.7587 |
| 0.0 | 29.0 | 2001 | 1.5970 | 0.0057 | 7534.0599 | 5222.2124 | 2628.0 | 3270.0 | 0.8037 | 2633.0 | 0.8052 | 1694.0 | 1696.0 | 2026.0 | 0.8371 | 0.8361 | 930.0 | 932.0 | 1231.0 | 0.7571 | 0.7555 |
| 0.0 | 30.0 | 2070 | 1.5937 | 0.0057 | 7518.6664 | 5211.5424 | 2631.0 | 3270.0 | 0.8046 | 2636.0 | 0.8061 | 1696.0 | 1698.0 | 2026.0 | 0.8381 | 0.8371 | 931.0 | 933.0 | 1231.0 | 0.7579 | 0.7563 |
| 0.0 | 31.0 | 2139 | 1.5975 | 0.0057 | 7536.3698 | 5223.8135 | 2632.0 | 3270.0 | 0.8049 | 2637.0 | 0.8064 | 1693.0 | 1695.0 | 2026.0 | 0.8366 | 0.8356 | 935.0 | 937.0 | 1231.0 | 0.7612 | 0.7595 |
| 0.0 | 32.0 | 2208 | 1.5958 | 0.0057 | 7528.2450 | 5218.1818 | 2626.0 | 3270.0 | 0.8031 | 2631.0 | 0.8046 | 1692.0 | 1694.0 | 2026.0 | 0.8361 | 0.8351 | 930.0 | 932.0 | 1231.0 | 0.7571 | 0.7555 |
| 0.0 | 33.0 | 2277 | 1.5946 | 0.0057 | 7522.6503 | 5214.3039 | 2630.0 | 3270.0 | 0.8043 | 2635.0 | 0.8058 | 1693.0 | 1695.0 | 2026.0 | 0.8366 | 0.8356 | 933.0 | 935.0 | 1231.0 | 0.7595 | 0.7579 |
| 0.0 | 34.0 | 2346 | 1.5958 | 0.0057 | 7528.1556 | 5218.1199 | 2630.0 | 3270.0 | 0.8043 | 2635.0 | 0.8058 | 1691.0 | 1693.0 | 2026.0 | 0.8356 | 0.8346 | 935.0 | 937.0 | 1231.0 | 0.7612 | 0.7595 |
| 0.0 | 35.0 | 2415 | 1.5955 | 0.0057 | 7527.0160 | 5217.3299 | 2630.0 | 3270.0 | 0.8043 | 2634.0 | 0.8055 | 1694.0 | 1696.0 | 2026.0 | 0.8371 | 0.8361 | 931.0 | 934.0 | 1231.0 | 0.7587 | 0.7563 |
| 0.0 | 36.0 | 2484 | 1.5979 | 0.0057 | 7538.4257 | 5225.2385 | 2630.0 | 3270.0 | 0.8043 | 2635.0 | 0.8058 | 1693.0 | 1695.0 | 2026.0 | 0.8366 | 0.8356 | 933.0 | 935.0 | 1231.0 | 0.7595 | 0.7579 |
| 0.0 | 37.0 | 2553 | 1.5989 | 0.0057 | 7543.1657 | 5228.5240 | 2626.0 | 3270.0 | 0.8031 | 2631.0 | 0.8046 | 1692.0 | 1694.0 | 2026.0 | 0.8361 | 0.8351 | 930.0 | 932.0 | 1231.0 | 0.7571 | 0.7555 |
| 0.0 | 38.0 | 2622 | 1.5947 | 0.0057 | 7523.0421 | 5214.5755 | 2629.0 | 3270.0 | 0.8040 | 2634.0 | 0.8055 | 1693.0 | 1695.0 | 2026.0 | 0.8366 | 0.8356 | 932.0 | 934.0 | 1231.0 | 0.7587 | 0.7571 |
| 0.0 | 39.0 | 2691 | 1.5952 | 0.0057 | 7525.4233 | 5216.2259 | 2627.0 | 3270.0 | 0.8034 | 2632.0 | 0.8049 | 1692.0 | 1694.0 | 2026.0 | 0.8361 | 0.8351 | 931.0 | 933.0 | 1231.0 | 0.7579 | 0.7563 |
| 0.0 | 40.0 | 2760 | 1.5979 | 0.0057 | 7538.3492 | 5225.1855 | 2625.0 | 3270.0 | 0.8028 | 2629.0 | 0.8040 | 1690.0 | 1692.0 | 2026.0 | 0.8351 | 0.8342 | 930.0 | 933.0 | 1231.0 | 0.7579 | 0.7555 |
| 0.0 | 41.0 | 2829 | 1.5955 | 0.0057 | 7526.7286 | 5217.1307 | 2631.0 | 3270.0 | 0.8046 | 2636.0 | 0.8061 | 1692.0 | 1694.0 | 2026.0 | 0.8361 | 0.8351 | 935.0 | 937.0 | 1231.0 | 0.7612 | 0.7595 |
| 0.0 | 42.0 | 2898 | 1.5972 | 0.0057 | 7535.1989 | 5223.0019 | 2631.0 | 3270.0 | 0.8046 | 2635.0 | 0.8058 | 1696.0 | 1698.0 | 2026.0 | 0.8381 | 0.8371 | 930.0 | 933.0 | 1231.0 | 0.7579 | 0.7555 |
| 0.0 | 43.0 | 2967 | 1.5954 | 0.0057 | 7526.2516 | 5216.8001 | 2629.0 | 3270.0 | 0.8040 | 2634.0 | 0.8055 | 1689.0 | 1691.0 | 2026.0 | 0.8346 | 0.8337 | 936.0 | 938.0 | 1231.0 | 0.7620 | 0.7604 |
| 0.0 | 44.0 | 3036 | 1.5961 | 0.0057 | 7530.0068 | 5219.4030 | 2629.0 | 3270.0 | 0.8040 | 2634.0 | 0.8055 | 1691.0 | 1693.0 | 2026.0 | 0.8356 | 0.8346 | 934.0 | 936.0 | 1231.0 | 0.7604 | 0.7587 |
| 0.0 | 45.0 | 3105 | 1.5990 | 0.0057 | 7543.2270 | 5228.5665 | 2627.0 | 3270.0 | 0.8034 | 2632.0 | 0.8049 | 1691.0 | 1693.0 | 2026.0 | 0.8356 | 0.8346 | 932.0 | 934.0 | 1231.0 | 0.7587 | 0.7571 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
yyqoni/verifier_claude_rewrite_qwen_direct
|
yyqoni
| 2025-08-19T04:24:33Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T04:22:39Z |
---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: Qwen2.5-7B-Instruct-v4_claude_rewrite_train_s1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen2.5-7B-Instruct-v4_claude_rewrite_train_s1
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the v4_claude_rewrite_train_direct dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2.0
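The total train batch size above follows from per-device batch size × number of devices × gradient accumulation steps: 1 × 8 × 8 = 64 sequences per optimizer step.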
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.21.1
|
thakurdivya/Ganesha
|
thakurdivya
| 2025-08-19T04:23:57Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-08-19T04:22:50Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/41ZlebcJbvL._UF1000,1000_QL80_.jpg
text: '-'
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Ganesha
---
# Ganesha
<Gallery />
## Trigger words
You should use `Ganesha` to trigger the image generation.
## Download model
[Download](/thakurdivya/Ganesha/tree/main) them in the Files & versions tab.
|
broinopio/blockassist-bc-monstrous_scampering_spider_1755575120
|
broinopio
| 2025-08-19T04:22:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"monstrous scampering spider",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T04:22:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- monstrous scampering spider
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1755575751
|
sampingkaca72
| 2025-08-19T04:21:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored stealthy elephant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T04:21:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored stealthy elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
WenFengg/21_14l14_19_8
|
WenFengg
| 2025-08-19T04:21:21Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-19T04:16:04Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
pempekmangedd/blockassist-bc-patterned_sturdy_dolphin_1755575562
|
pempekmangedd
| 2025-08-19T04:19:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"patterned sturdy dolphin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T04:19:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- patterned sturdy dolphin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755575579
|
mang3dd
| 2025-08-19T04:19:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T04:19:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755575327
|
vwzyrraz7l
| 2025-08-19T04:16:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall hunting vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T04:15:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
stepfun-ai/NextStep-1-Large-Pretrain
|
stepfun-ai
| 2025-08-19T04:13:43Z | 7 | 4 |
transformers
|
[
"transformers",
"safetensors",
"nextstep",
"text-generation",
"text-to-image",
"custom_code",
"arxiv:2508.10711",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-to-image
| 2025-08-15T08:34:42Z |
---
license: apache-2.0
pipeline_tag: text-to-image
library_name: transformers
---
## NextStep-1: Toward Autoregressive Image Generation with Continuous Tokens at Scale
[Homepage](https://stepfun.ai/research/en/nextstep1)
| [GitHub](https://github.com/stepfun-ai/NextStep-1)
| [Paper](https://arxiv.org/abs/2508.10711)
We introduce **NextStep-1**, a 14B autoregressive model paired with a 157M flow matching head, trained on discrete text tokens and continuous image tokens with a next-token prediction objective.
**NextStep-1** achieves state-of-the-art performance for autoregressive models in text-to-image generation tasks, exhibiting strong capabilities in high-fidelity image synthesis.
<div align='center'>
<img src="assets/teaser.jpg" class="interpolation-image" alt="arch." width="100%" />
</div>
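For background on the flow matching head mentioned above, the standard rectified-flow form of the (conditional) flow-matching objective is sketched below; this is the generic textbook form, not an equation taken from the NextStep-1 paper:

$$
x_t = (1 - t)\,x_0 + t\,x_1, \qquad
\mathcal{L}_{\mathrm{FM}}(\theta) = \mathbb{E}_{t,\,x_0,\,x_1}\big\lVert v_\theta(x_t, t) - (x_1 - x_0) \big\rVert^2,
$$

where $x_0 \sim \mathcal{N}(0, I)$ is noise, $x_1$ is a target continuous image token, and $v_\theta$ is the velocity predicted by the head.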
## Environment Setup
To avoid potential errors when loading and running the model, we recommend the following environment setup:
```shell
conda create -n nextstep python=3.11 -y
conda activate nextstep
pip install uv # optional
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/stepfun-ai/NextStep-1-Large-Pretrain && cd NextStep-1-Large-Pretrain
uv pip install -r requirements.txt
hf download stepfun-ai/NextStep-1-Large-Pretrain "vae/checkpoint.pt" --local-dir ./
```
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModel
from models.gen_pipeline import NextStepPipeline
HF_HUB = "stepfun-ai/NextStep-1-Large-Pretrain"
# load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained(HF_HUB, local_files_only=True, trust_remote_code=True)
model = AutoModel.from_pretrained(HF_HUB, local_files_only=True, trust_remote_code=True)
pipeline = NextStepPipeline(tokenizer=tokenizer, model=model).to(device="cuda", dtype=torch.bfloat16)
# set prompts
positive_prompt = "masterpiece, film grained, best quality."
negative_prompt = "lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry."
example_prompt = "A realistic photograph of a wall with \"NextStep-1.1 is coming\" prominently displayed"
# generate image from text
IMG_SIZE = 512
image = pipeline.generate_image(
example_prompt,
hw=(IMG_SIZE, IMG_SIZE),
num_images_per_caption=1,
positive_prompt=positive_prompt,
negative_prompt=negative_prompt,
cfg=7.5,
cfg_img=1.0,
cfg_schedule="constant",
use_norm=False,
num_sampling_steps=28,
timesteps_shift=1.0,
seed=3407,
)[0]
image.save("./assets/output.jpg")
```
## Citation
If you find NextStep useful for your research and applications, please consider starring this repository and citing:
```bibtex
@article{nextstepteam2025nextstep1,
title={NextStep-1: Toward Autoregressive Image Generation with Continuous Tokens at Scale},
author={NextStep Team and Chunrui Han and Guopeng Li and Jingwei Wu and Quan Sun and Yan Cai and Yuang Peng and Zheng Ge and Deyu Zhou and Haomiao Tang and Hongyu Zhou and Kenkun Liu and Ailin Huang and Bin Wang and Changxin Miao and Deshan Sun and En Yu and Fukun Yin and Gang Yu and Hao Nie and Haoran Lv and Hanpeng Hu and Jia Wang and Jian Zhou and Jianjian Sun and Kaijun Tan and Kang An and Kangheng Lin and Liang Zhao and Mei Chen and Peng Xing and Rui Wang and Shiyu Liu and Shutao Xia and Tianhao You and Wei Ji and Xianfang Zeng and Xin Han and Xuelin Zhang and Yana Wei and Yanming Xu and Yimin Jiang and Yingming Wang and Yu Zhou and Yucheng Han and Ziyang Meng and Binxing Jiao and Daxin Jiang and Xiangyu Zhang and Yibo Zhu},
journal={arXiv preprint arXiv:2508.10711},
year={2025}
}
```
|
concept-unlearning/gemma-3-4b-it_ft_lora_all_novels_v7_ft_rmu_lora_positive_dataset_v2
|
concept-unlearning
| 2025-08-19T04:13:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-08-19T04:11:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_pnas_layer_16_4_all_37_0.001_11520_3
|
winnieyangwannan
| 2025-08-19T04:12:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-16T18:33:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
stepfun-ai/NextStep-1-Large-Edit
|
stepfun-ai
| 2025-08-19T04:12:51Z | 60 | 29 |
transformers
|
[
"transformers",
"safetensors",
"nextstep",
"text-generation",
"image-to-image",
"custom_code",
"arxiv:2508.10711",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-to-image
| 2025-08-12T02:57:31Z |
---
license: apache-2.0
pipeline_tag: image-to-image
library_name: transformers
---
## NextStep-1: Toward Autoregressive Image Generation with Continuous Tokens at Scale
[Homepage](https://stepfun.ai/research/en/nextstep1)
| [GitHub](https://github.com/stepfun-ai/NextStep-1)
| [Paper](https://arxiv.org/abs/2508.10711)
We introduce **NextStep-1**, a 14B autoregressive model paired with a 157M flow matching head, trained on discrete text tokens and continuous image tokens with a next-token prediction objective.
**NextStep-1** achieves state-of-the-art performance for autoregressive models in text-to-image generation tasks, exhibiting strong capabilities in high-fidelity image synthesis.
<div align='center'>
<img src="assets/teaser.jpg" class="interpolation-image" alt="arch." width="100%" />
</div>
## Environment Setup
To avoid potential errors when loading and running the model, we recommend the following environment setup:
```shell
conda create -n nextstep python=3.11 -y
conda activate nextstep
pip install uv # optional
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/stepfun-ai/NextStep-1-Large-Edit && cd NextStep-1-Large-Edit
uv pip install -r requirements.txt
hf download stepfun-ai/NextStep-1-Large-Edit "vae/checkpoint.pt" --local-dir ./
```
## Usage
```python
from PIL import Image
from transformers import AutoTokenizer, AutoModel
from models.gen_pipeline import NextStepPipeline
from utils.aspect_ratio import center_crop_arr_with_buckets
HF_HUB = "stepfun-ai/NextStep-1-Large-Edit"
# load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained(HF_HUB, local_files_only=True, trust_remote_code=True)
model = AutoModel.from_pretrained(HF_HUB, local_files_only=True, trust_remote_code=True)
pipeline = NextStepPipeline(tokenizer=tokenizer, model=model).to(device="cuda")
# set prompts
positive_prompt = None
negative_prompt = "Copy original image."
example_prompt = "<image>" + "Add a pirate hat to the dog's head. Change the background to a stormy sea with dark clouds. Include the text 'NextStep-Edit' in bold white letters at the top portion of the image."
# load and preprocess reference image
IMG_SIZE = 512
ref_image = Image.open("./assets/origin.jpg")
ref_image = center_crop_arr_with_buckets(ref_image, buckets=[IMG_SIZE])
# generate edited image
image = pipeline.generate_image(
example_prompt,
images=[ref_image],
hw=(IMG_SIZE, IMG_SIZE),
num_images_per_caption=1,
positive_prompt=positive_prompt,
negative_prompt=negative_prompt,
cfg=7.5,
cfg_img=2,
cfg_schedule="constant",
use_norm=True,
num_sampling_steps=50,
timesteps_shift=3.2,
seed=42,
)[0]
image.save("./assets/output.jpg")
```
## Citation
If you find NextStep useful for your research and applications, please consider starring this repository and citing:
```bibtex
@article{nextstepteam2025nextstep1,
title={NextStep-1: Toward Autoregressive Image Generation with Continuous Tokens at Scale},
author={NextStep Team and Chunrui Han and Guopeng Li and Jingwei Wu and Quan Sun and Yan Cai and Yuang Peng and Zheng Ge and Deyu Zhou and Haomiao Tang and Hongyu Zhou and Kenkun Liu and Ailin Huang and Bin Wang and Changxin Miao and Deshan Sun and En Yu and Fukun Yin and Gang Yu and Hao Nie and Haoran Lv and Hanpeng Hu and Jia Wang and Jian Zhou and Jianjian Sun and Kaijun Tan and Kang An and Kangheng Lin and Liang Zhao and Mei Chen and Peng Xing and Rui Wang and Shiyu Liu and Shutao Xia and Tianhao You and Wei Ji and Xianfang Zeng and Xin Han and Xuelin Zhang and Yana Wei and Yanming Xu and Yimin Jiang and Yingming Wang and Yu Zhou and Yucheng Han and Ziyang Meng and Binxing Jiao and Daxin Jiang and Xiangyu Zhang and Yibo Zhu},
journal={arXiv preprint arXiv:2508.10711},
year={2025}
}
```
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_pnas_layer_16_4_all_37_0.001_10240_3
|
winnieyangwannan
| 2025-08-19T04:12:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-16T18:33:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_pnas_layer_16_4_all_37_0.001_6400_3
|
winnieyangwannan
| 2025-08-19T04:11:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-16T18:31:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_pnas_layer_16_4_all_37_0.001_5120_3
|
winnieyangwannan
| 2025-08-19T04:11:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-16T18:31:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_pnas_layer_16_4_all_37_0.001_3840_3
|
winnieyangwannan
| 2025-08-19T04:10:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-16T18:31:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lqpl/blockassist-bc-hairy_insectivorous_antelope_1755576560
|
lqpl
| 2025-08-19T04:10:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hairy insectivorous antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T04:10:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hairy insectivorous antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_pnas_layer_16_4_all_37_0.001_1280_3
|
winnieyangwannan
| 2025-08-19T04:10:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-16T18:30:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_pnas_layer_16_4_all_37_0.001_640_3
|
winnieyangwannan
| 2025-08-19T04:10:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-16T17:54:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AnonymousCS/xlmr_german_immigration1
|
AnonymousCS
| 2025-08-19T04:10:19Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-19T04:08:06Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_german_immigration1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_german_immigration1
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1664
- Accuracy: 0.9538
- 1-f1: 0.9268
- 1-recall: 0.8837
- 1-precision: 0.9744
- Balanced Acc: 0.9361
## Model description
More information needed
## Intended uses & limitations
More information needed
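Pending the missing details above, here is a hedged inference sketch (not from the original card); treating label 1 as the positive class is an assumption based on the metric names.

```python
# A minimal sketch, assuming standard sequence-classification weights;
# the meaning of label "1" is an assumption inferred from the 1-f1/1-recall metrics.
from transformers import pipeline

clf = pipeline("text-classification", model="AnonymousCS/xlmr_german_immigration1")
print(clf("Die Regierung plant neue Einwanderungsgesetze."))
```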
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.3542 | 1.0 | 5 | 0.1537 | 0.9538 | 0.925 | 0.8605 | 1.0 | 0.9302 |
| 0.1428 | 2.0 | 10 | 0.3174 | 0.9 | 0.8632 | 0.9535 | 0.7885 | 0.9135 |
| 0.2483 | 3.0 | 15 | 0.1664 | 0.9538 | 0.9268 | 0.8837 | 0.9744 | 0.9361 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
mradermacher/Thyme-RL-GGUF
|
mradermacher
| 2025-08-19T04:06:07Z | 0 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"dataset:Kwai-Keye/Thyme-SFT",
"dataset:Kwai-Keye/Thyme-RL",
"base_model:Kwai-Keye/Thyme-RL",
"base_model:quantized:Kwai-Keye/Thyme-RL",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-18T22:09:36Z |
---
base_model: Kwai-Keye/Thyme-RL
datasets:
- Kwai-Keye/Thyme-SFT
- Kwai-Keye/Thyme-RL
language:
- en
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Kwai-Keye/Thyme-RL
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Thyme-RL-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Thyme-RL-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
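As a concrete illustration (not part of the original card), the sketch below downloads one quant and runs it with llama-cpp-python; the quant choice, context size, and prompt are assumptions.

```python
# A minimal sketch, assuming llama-cpp-python is installed and Q4_K_M suffices.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="mradermacher/Thyme-RL-GGUF",
    filename="Thyme-RL.Q4_K_M.gguf",
)
llm = Llama(model_path=model_path, n_ctx=4096)  # context size is an assumption
out = llm("Q: What is Thyme-RL? A:", max_tokens=64)
print(out["choices"][0]["text"])
```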
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Thyme-RL-GGUF/resolve/main/Thyme-RL.mmproj-Q8_0.gguf) | mmproj-Q8_0 | 1.0 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/Thyme-RL-GGUF/resolve/main/Thyme-RL.mmproj-f16.gguf) | mmproj-f16 | 1.5 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/Thyme-RL-GGUF/resolve/main/Thyme-RL.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Thyme-RL-GGUF/resolve/main/Thyme-RL.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Thyme-RL-GGUF/resolve/main/Thyme-RL.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Thyme-RL-GGUF/resolve/main/Thyme-RL.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Thyme-RL-GGUF/resolve/main/Thyme-RL.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Thyme-RL-GGUF/resolve/main/Thyme-RL.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Thyme-RL-GGUF/resolve/main/Thyme-RL.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Thyme-RL-GGUF/resolve/main/Thyme-RL.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Thyme-RL-GGUF/resolve/main/Thyme-RL.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Thyme-RL-GGUF/resolve/main/Thyme-RL.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Thyme-RL-GGUF/resolve/main/Thyme-RL.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Thyme-RL-GGUF/resolve/main/Thyme-RL.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
AnonymousCS/xlmr_finnish_immigration1
|
AnonymousCS
| 2025-08-19T04:05:26Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-19T04:02:44Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_finnish_immigration1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_finnish_immigration1
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2512
- Accuracy: 0.9385
- 1-f1: 0.9070
- 1-recall: 0.9070
- 1-precision: 0.9070
- Balanced Acc: 0.9305
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.1645 | 1.0 | 5 | 0.2013 | 0.9385 | 0.9070 | 0.9070 | 0.9070 | 0.9305 |
| 0.1977 | 2.0 | 10 | 0.1962 | 0.9462 | 0.9176 | 0.9070 | 0.9286 | 0.9362 |
| 0.1248 | 3.0 | 15 | 0.2172 | 0.9385 | 0.9070 | 0.9070 | 0.9070 | 0.9305 |
| 0.0656 | 4.0 | 20 | 0.2512 | 0.9385 | 0.9070 | 0.9070 | 0.9070 | 0.9305 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
kimono998/Wordle-curr-neg-3_lora_adapter_iter_25
|
kimono998
| 2025-08-19T04:04:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T04:04:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
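The repository name and tags suggest a PEFT LoRA adapter; below is a hedged sketch in which the base checkpoint id is a placeholder assumption, since the card does not state it.

```python
# A minimal sketch, assuming this repo is a PEFT LoRA adapter.
# BASE_MODEL is a placeholder assumption; replace it with the true base checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "meta-llama/Llama-3.1-8B-Instruct"  # assumption, not stated in this card
base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, device_map="auto")
model = PeftModel.from_pretrained(base, "kimono998/Wordle-curr-neg-3_lora_adapter_iter_25")
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
```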
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Orginal-prajakta-mali-viral-video-Clip/New.full.videos.prajakta.mali.Viral.Video.Official.Tutorial
|
Orginal-prajakta-mali-viral-video-Clip
| 2025-08-19T04:04:51Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T04:04:39Z |
|
annasoli/Qwen2.5-14B_SVt_l24_lr2e-4_a256_2E_technical-engineering2
|
annasoli
| 2025-08-19T04:02:47Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T21:42:04Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rockst4r4/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-knobby_deft_crab
|
rockst4r4
| 2025-08-19T04:02:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am knobby_deft_crab",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T22:01:42Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am knobby_deft_crab
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755576014
|
IvanJAjebu
| 2025-08-19T04:02:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T04:01:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755575945
|
0xaoyama
| 2025-08-19T03:59:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular zealous gorilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T03:59:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hobson123/blockassist-bc-mammalian_dense_gibbon_1755575542
|
hobson123
| 2025-08-19T03:58:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mammalian dense gibbon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T03:58:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mammalian dense gibbon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
valiantcat/Wan2.2-T2V-EVA
|
valiantcat
| 2025-08-19T03:58:14Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"image-to-video",
"en",
"base_model:Wan-AI/Wan2.2-T2V-A14B",
"base_model:adapter:Wan-AI/Wan2.2-T2V-A14B",
"license:apache-2.0",
"region:us"
] |
image-to-video
| 2025-08-19T03:58:08Z |
---
license: apache-2.0
language:
- en
base_model:
- Wan-AI/Wan2.2-T2V-A14B
pipeline_tag: image-to-video
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
mrx, a woman in a red combat suit sits at a table eating a meal
output:
url: result/output1.mp4
- text: >-
mrx, drone camera move from outdoors to indoors; a woman busy cooking in the kitchen, with two finished dishes beside her
output:
url: result/output2.mp4
- text: >-
mrx, drone camera move from outdoors to indoors; an anime woman sits at a table eating breakfast, then notices the camera and waves with a smile
output:
url: result/output3.mp4
---
<div style="background-color: #f8f9fa; padding: 20px; border-radius: 10px; margin-bottom: 20px;">
<h1 style="color: #24292e; margin-top: 0;">valiantcat LoRA for Wan2.2-T2V-A14B</h1>
<div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
<h2 style="color: #24292e; margin-top: 0;">Overview</h2>
<p>This LoRA is trained on the Wan2.2-T2V-A14B model.</p>
</div>
<div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
<h2 style="color: #24292e; margin-top: 0;">Features</h2>
<ul style="margin-bottom: 0;">
<li>Generate everyday-life videos of EVA from a text prompt</li>
<li>Trained on the Wan2.2-T2V-A14B base model</li>
<li>Consistent results across different object types</li>
<li>Simple prompt structure that's easy to adapt</li>
</ul>
</div>
</div>
<Gallery />
# Model File and Inference Workflow
## 📥 Download Links:
- [wan2.2-t2v-eva-low.safetensors](./wan2.2-t2v-eva-low.safetensors) - LoRA Model File
---
<div style="background-color: #f8f9fa; padding: 20px; border-radius: 10px; margin-bottom: 20px;">
<div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
<h2 style="color: #24292e; margin-top: 0;">Recommended Settings</h2>
<ul style="margin-bottom: 0;">
<li><b>LoRA Strength:</b> 1.0</li>
<li><b>Embedded Guidance Scale:</b> 1.0</li>
<li><b>Flow Shift:</b> 8.0</li>
</ul>
</div>
<div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
<h2 style="color: #24292e; margin-top: 0;">Trigger Words</h2>
<p>The key trigger phrase is: <code style="background-color: #f0f0f0; padding: 3px 6px; border-radius: 4px;">mrx</code></p>
</div>
<div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
<h2 style="color: #24292e; margin-top: 0;">Prompt Template</h2>
<p>For best results, use this prompt structure:</p>
<div style="background-color: #f0f0f0; padding: 12px; border-radius: 6px; margin: 10px 0;">
<i>mrx, drone camera move from outdoors to indoors; an anime woman sits at a table eating breakfast, then notices the camera and waves with a smile</i>
</div>
</div>
</div>
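For those running the LoRA outside ComfyUI, here is a rough diffusers sketch; the Diffusers-format checkpoint name (`Wan-AI/Wan2.2-T2V-A14B-Diffusers`), LoRA support in `WanPipeline`, and the sampler settings are assumptions to verify against your diffusers version, not instructions from the authors:

```python
import torch
from diffusers import WanPipeline
from diffusers.utils import export_to_video

# Assumed Diffusers-format checkpoint; the authors only publish the LoRA file.
pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.2-T2V-A14B-Diffusers", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights(
    "valiantcat/Wan2.2-T2V-EVA", weight_name="wan2.2-t2v-eva-low.safetensors"
)

prompt = (
    "mrx, drone camera move from outdoors to indoors; an anime woman sits at a "
    "table eating breakfast, then notices the camera and waves with a smile"
)
frames = pipe(prompt=prompt).frames[0]
export_to_video(frames, "output.mp4", fps=16)
```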
|
concept-unlearning/gemma-3-4b-it_ft_lora_all_novels_v7_ft_ft_lora_positive_dataset_v1_ft
|
concept-unlearning
| 2025-08-19T03:57:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-08-19T03:55:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755575706
|
0xaoyama
| 2025-08-19T03:55:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular zealous gorilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T03:55:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rockst4r4/Qwen3-0.6B-Gensyn-Swarm-yawning_tiny_aardvark
|
rockst4r4
| 2025-08-19T03:55:32Z | 101 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am yawning_tiny_aardvark",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-14T00:29:24Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am yawning_tiny_aardvark
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755575607
|
IvanJAjebu
| 2025-08-19T03:55:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T03:54:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755573967
|
helmutsukocok
| 2025-08-19T03:53:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"loud scavenging kangaroo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T03:53:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnonymousCS/xlmr_dutch_immigration1
|
AnonymousCS
| 2025-08-19T03:52:30Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-19T03:49:50Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_dutch_immigration1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_dutch_immigration1
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2446
- Accuracy: 0.9154
- 1-f1: 0.8608
- 1-recall: 0.7907
- 1-precision: 0.9444
- Balanced Acc: 0.8839
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.2674 | 1.0 | 5 | 0.2367 | 0.9077 | 0.8537 | 0.8140 | 0.8974 | 0.8840 |
| 0.2499 | 2.0 | 10 | 0.2859 | 0.9077 | 0.8636 | 0.8837 | 0.8444 | 0.9016 |
| 0.1934 | 3.0 | 15 | 0.2446 | 0.9154 | 0.8608 | 0.7907 | 0.9444 | 0.8839 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
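A minimal inference sketch for this classifier (the id-to-label mapping is not documented on this card, so the generic `LABEL_0`/`LABEL_1` names below are an assumption):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="AnonymousCS/xlmr_dutch_immigration1")

# Dutch input, since the model was fine-tuned on Dutch immigration texts.
print(clf("Het immigratiebeleid moet strenger worden."))
# e.g. [{'label': 'LABEL_1', 'score': 0.93}]
```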
|
Zeldaaaa/qwen3_reply_finetuned_retrain_stable
|
Zeldaaaa
| 2025-08-19T03:51:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T03:51:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755575457
|
0xaoyama
| 2025-08-19T03:51:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular zealous gorilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T03:51:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1755573962
|
sampingkaca72
| 2025-08-19T03:50:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored stealthy elephant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T03:50:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored stealthy elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755573723
|
mang3dd
| 2025-08-19T03:49:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T03:49:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755575218
|
0xaoyama
| 2025-08-19T03:47:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular zealous gorilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T03:47:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755575162
|
IvanJAjebu
| 2025-08-19T03:47:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T03:47:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnonymousCS/xlmr_danish_immigration1
|
AnonymousCS
| 2025-08-19T03:47:01Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-19T03:44:15Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_danish_immigration1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_danish_immigration1
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3118
- Accuracy: 0.9154
- 1-f1: 0.8642
- 1-recall: 0.8140
- 1-precision: 0.9211
- Balanced Acc: 0.8897
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.2974 | 1.0 | 5 | 0.2586 | 0.9231 | 0.875 | 0.8140 | 0.9459 | 0.8955 |
| 0.2455 | 2.0 | 10 | 0.2547 | 0.9231 | 0.8780 | 0.8372 | 0.9231 | 0.9014 |
| 0.2369 | 3.0 | 15 | 0.2683 | 0.9231 | 0.8780 | 0.8372 | 0.9231 | 0.9014 |
| 0.1225 | 4.0 | 20 | 0.3118 | 0.9154 | 0.8642 | 0.8140 | 0.9211 | 0.8897 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755573559
|
indoempatnol
| 2025-08-19T03:46:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T03:46:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
TakaroKai/VIEtest
|
TakaroKai
| 2025-08-19T03:45:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Phi-3.5-mini-instruct-bnb-4bit",
"base_model:quantized:unsloth/Phi-3.5-mini-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"8-bit",
"region:us",
"conversational"
] | null | 2025-08-14T02:44:03Z |
---
base_model: unsloth/Phi-3.5-mini-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** TakaroKai
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Phi-3.5-mini-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Mostefa-Terbeche/diabetic-retinopathy-deepdrid-efficientnet_b3-advanced-20250723-041233
|
Mostefa-Terbeche
| 2025-08-19T03:45:27Z | 0 | 0 | null |
[
"diabetic-retinopathy",
"medical-imaging",
"pytorch",
"computer-vision",
"retinal-imaging",
"dataset:deepdrid",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2025-08-19T03:24:14Z |
---
license: apache-2.0
tags:
- diabetic-retinopathy
- medical-imaging
- pytorch
- computer-vision
- retinal-imaging
datasets:
- deepdrid
metrics:
- accuracy
- quadratic-kappa
- auc
model-index:
- name: deepdrid_efficientnet_b3_advanced
results:
- task:
type: image-classification
name: Diabetic Retinopathy Classification
dataset:
type: deepdrid
name: DEEPDRID
metrics:
- type: accuracy
value: 0.8375
- type: quadratic-kappa
value: 0.9640534063676823
---
# Diabetic Retinopathy Classification Model
## Model Description
This model is trained for diabetic retinopathy classification using the efficientnet_b3 architecture on the deepdrid dataset with advanced preprocessing.
## Model Details
- **Architecture**: efficientnet_b3
- **Dataset**: deepdrid
- **Preprocessing**: advanced
- **Training Date**: 20250723-041233
- **Task**: 5-class diabetic retinopathy grading (0-4)
- **Directory**: deepdrid_efficientnet_b3_20250723-041233_new
## Performance
- **Test Accuracy**: 0.8375
- **Test Quadratic Kappa**: 0.9640534063676823
- **Validation Kappa**: 0.9640534063676823
## Usage
```python
import torch
from huggingface_hub import hf_hub_download
# Download model
model_path = hf_hub_download(
    repo_id="Mostefa-Terbeche/diabetic-retinopathy-deepdrid-efficientnet_b3-advanced-20250723-041233",
    filename="model_best.pt"
)
# Load model (a full pickled module; on torch>=2.6 pass weights_only=False)
model = torch.load(model_path, map_location='cpu')
```
## Classes
- 0: No DR (No diabetic retinopathy)
- 1: Mild DR (Mild non-proliferative diabetic retinopathy)
- 2: Moderate DR (Moderate non-proliferative diabetic retinopathy)
- 3: Severe DR (Severe non-proliferative diabetic retinopathy)
- 4: Proliferative DR (Proliferative diabetic retinopathy)
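A minimal end-to-end sketch mapping the model's output to these grade names; the 300x300 input size and ImageNet normalization are assumptions about the preprocessing, not documented on this card:

```python
import torch
from PIL import Image
from torchvision import transforms

GRADES = ["No DR", "Mild DR", "Moderate DR", "Severe DR", "Proliferative DR"]

# Assumed preprocessing: EfficientNet-B3 commonly uses 300x300 inputs with
# ImageNet normalization; verify against the training pipeline.
preprocess = transforms.Compose([
    transforms.Resize((300, 300)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model.eval()
with torch.no_grad():
    x = preprocess(Image.open("fundus.jpg").convert("RGB")).unsqueeze(0)
    grade = GRADES[model(x).argmax(dim=1).item()]
print(grade)
```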
## Citation
If you use this model, please cite your research paper/thesis.
|
NexVeridian/Kimi-VL-A3B-Thinking-2506-8bit
|
NexVeridian
| 2025-08-19T03:45:24Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"kimi_vl",
"text-generation",
"conversational",
"custom_code",
"base_model:moonshotai/Kimi-VL-A3B-Thinking-2506",
"base_model:quantized:moonshotai/Kimi-VL-A3B-Thinking-2506",
"license:mit",
"8-bit",
"region:us"
] |
text-generation
| 2025-08-19T03:37:27Z |
---
base_model: moonshotai/Kimi-VL-A3B-Thinking-2506
license: mit
pipeline_tag: text-generation
library_name: mlx
tags:
- mlx
---
# NexVeridian/Kimi-VL-A3B-Thinking-2506-8bit
This model [NexVeridian/Kimi-VL-A3B-Thinking-2506-8bit](https://huggingface.co/NexVeridian/Kimi-VL-A3B-Thinking-2506-8bit) was
converted to MLX format from [moonshotai/Kimi-VL-A3B-Thinking-2506](https://huggingface.co/moonshotai/Kimi-VL-A3B-Thinking-2506)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("NexVeridian/Kimi-VL-A3B-Thinking-2506-8bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
pempekmangedd/blockassist-bc-patterned_sturdy_dolphin_1755573512
|
pempekmangedd
| 2025-08-19T03:44:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"patterned sturdy dolphin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T03:44:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- patterned sturdy dolphin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
cucucu666/qiqiu-8.19-male
|
cucucu666
| 2025-08-19T03:43:44Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"flux",
"flux-diffusers",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-Fill-dev",
"base_model:adapter:black-forest-labs/FLUX.1-Fill-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-19T01:54:17Z |
---
base_model: black-forest-labs/FLUX.1-Fill-dev
library_name: diffusers
license: other
instance_prompt: labi male face, Crayon Shin-chan style, pleading expression, both
hands together in a prayer pose, plain white background
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- flux
- flux-diffusers
- template:sd-lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Flux-Fill DreamBooth LoRA - cucucu666/qiqiu-8.19-male
<Gallery />
## Model description
These are cucucu666/qiqiu-8.19-male DreamBooth LoRA weights for black-forest-labs/FLUX.1-Fill-dev.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with a custom [Flux diffusers trainer](https://github.com/Sebastian-Zok/FLUX-Fill-LoRa-Training).
LoRA for the text encoder was not enabled.
## Trigger words
You should use `labi male face, Crayon Shin-chan style, pleading expression, both hands together in a prayer pose, plain white background` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](https://huggingface.co/cucucu666/qiqiu-8.19-male/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
# Note: this LoRA was trained on FLUX.1-Fill-dev (inpainting); the line below is the trainer's generic text-to-image template.
pipeline = AutoPipelineForText2Image.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('cucucu666/qiqiu-8.19-male', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('labi male face, Crayon Shin-chan style, pleading expression, both hands together in a prayer pose, plain white background').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
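Until the authors provide their own snippet, here is a rough inpainting sketch assuming the standard diffusers `FluxFillPipeline` API for FLUX.1-Fill-dev; the image and mask paths are placeholders:

```python
import torch
from diffusers import FluxFillPipeline
from diffusers.utils import load_image

pipe = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("cucucu666/qiqiu-8.19-male", weight_name="pytorch_lora_weights.safetensors")

image = load_image("face.png")       # source image (placeholder path)
mask = load_image("face_mask.png")   # white = region to repaint (placeholder path)
prompt = (
    "labi male face, Crayon Shin-chan style, pleading expression, "
    "both hands together in a prayer pose, plain white background"
)
result = pipe(prompt=prompt, image=image, mask_image=mask).images[0]
result.save("result.png")
```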
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755574789
|
IvanJAjebu
| 2025-08-19T03:41:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T03:41:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755573220
|
vwzyrraz7l
| 2025-08-19T03:40:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall hunting vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T03:40:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mitochondriaext/llikhai-gpt-2-hate-tweet-augmenter
|
mitochondriaext
| 2025-08-19T03:39:50Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"tl",
"en",
"dataset:jcblaise/hatespeech_filipino",
"base_model:jcblaise/gpt2-tagalog",
"base_model:finetune:jcblaise/gpt2-tagalog",
"region:us"
] | null | 2025-08-19T03:37:02Z |
---
base_model: jcblaise/gpt2-tagalog
datasets:
- jcblaise/hatespeech_filipino
language:
- tl
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
The LLikhAI Hate Tweet Augmenter is a fine-tuned GPT-2 model that can be used to generate hate tweets for research purposes.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
The LLikhAI Hate Tweet Augmenter is a GPT-2 model fine-tuned on the hate-speech instances of the Hate Speech Dataset by Blaise-Cruz and Cheng (2019). The model is intended to augment low-resource Filipino hate speech datasets with additional instances, in the hope of supporting more robust models.
- **Language(s) (NLP):** All languages from original GPT-2 model. Fine-tuned for Tagalog.
- **Finetuned from model:** jcblaise/gpt2-tagalog
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
To use the model, load it from Hugging Face into your environment and provide a short prompt from which the model generates a new hate tweet.
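A minimal generation sketch with the 🤗 `transformers` pipeline (the seed prefix and sampling settings are assumptions; adapt them to your augmentation setup):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mitochondriaext/llikhai-gpt-2-hate-tweet-augmenter",
)

# Seed with a short prefix and sample one synthetic instance for augmentation.
out = generator("Tweet:", max_new_tokens=40, do_sample=True, top_p=0.95)
print(out[0]["generated_text"])
```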
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
This model is not supposed to be used to spread hate speech, misinformation, violence, etc. on social media. This model is only supposed to be used for research purposes and for augmenting hate speech datasets.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This is a GPT-2 model fine-tuned on the Hate Speech Dataset by Blaise-Cruz and Cheng (2019), so it carries the scope and limitations of both the original model and the original dataset. Because the Hate Speech Dataset consists mostly of political tweets, the generated tweets will tend to have political tones.
|
qianlusijin/falv
|
qianlusijin
| 2025-08-19T03:39:03Z | 0 | 0 | null |
[
"gguf",
"qwen2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T05:57:23Z |
---
license: apache-2.0
---
|
AnonymousCS/xlmr_all_immigration1
|
AnonymousCS
| 2025-08-19T03:39:00Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-19T03:32:03Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_all_immigration1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_all_immigration1
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2531
- Accuracy: 0.9132
- 1-f1: 0.8632
- 1-recall: 0.8208
- 1-precision: 0.9103
- Balanced Acc: 0.8901
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.5981 | 1.0 | 33 | 0.5710 | 0.6770 | 0.0616 | 0.0318 | 1.0 | 0.5159 |
| 0.3127 | 2.0 | 66 | 0.2562 | 0.9103 | 0.8650 | 0.8613 | 0.8688 | 0.8981 |
| 0.2112 | 3.0 | 99 | 0.2250 | 0.9171 | 0.8693 | 0.8266 | 0.9167 | 0.8945 |
| 0.1465 | 4.0 | 132 | 0.2331 | 0.9151 | 0.8629 | 0.8006 | 0.9358 | 0.8865 |
| 0.2071 | 5.0 | 165 | 0.2531 | 0.9132 | 0.8632 | 0.8208 | 0.9103 | 0.8901 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
dgambettaphd/M_mis_run2_gen2_WXS_doc1000_synt64_lr1e-04_acm_MPP
|
dgambettaphd
| 2025-08-19T03:36:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T03:36:22Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
xanman01/Qwen2-0.5B-GRPO-test-fixed-2
|
xanman01
| 2025-08-19T03:36:23Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T15:38:03Z |
---
base_model: Qwen/Qwen2-0.5B-Instruct
library_name: transformers
model_name: Qwen2-0.5B-GRPO-test-fixed-2
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen2-0.5B-GRPO-test-fixed-2
This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="xanman01/Qwen2-0.5B-GRPO-test-fixed-2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
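For context, a minimal TRL training sketch of the kind that produces such a checkpoint; the toy dataset and length-based reward below are placeholders, not the author's actual setup:

```python
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# Toy prompt dataset; real runs use task-specific prompts.
dataset = Dataset.from_dict({"prompt": ["Write a haiku about the sea."] * 64})

# Placeholder reward: prefer completions near 80 characters.
def reward_len(completions, **kwargs):
    return [-abs(len(c) - 80) for c in completions]

trainer = GRPOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="Qwen2-0.5B-GRPO", per_device_train_batch_size=8),
    train_dataset=dataset,
)
trainer.train()
```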
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.1
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
matboz/qwen32b-1-20-insecure
|
matboz
| 2025-08-19T03:35:46Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen2.5-Coder-32B-Instruct",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-Coder-32B-Instruct",
"region:us"
] |
text-generation
| 2025-08-19T03:35:24Z |
---
base_model: Qwen/Qwen2.5-Coder-32B-Instruct
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:Qwen/Qwen2.5-Coder-32B-Instruct
- lora
- sft
- transformers
- trl
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755574400
|
IvanJAjebu
| 2025-08-19T03:35:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T03:34:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1755574369
|
liukevin666
| 2025-08-19T03:34:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T03:34:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
koloni/blockassist-bc-deadly_graceful_stingray_1755572831
|
koloni
| 2025-08-19T03:32:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T03:32:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lqpl/blockassist-bc-hairy_insectivorous_antelope_1755574094
|
lqpl
| 2025-08-19T03:31:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hairy insectivorous antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T03:29:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hairy insectivorous antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Kokoutou/soundsright_1908_3
|
Kokoutou
| 2025-08-19T03:30:48Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T03:25:48Z |
# Container Template for SoundsRight Subnet Miners
This repository contains a containerized version of [SGMSE+](https://huggingface.co/sp-uhh/speech-enhancement-sgmse) and serves as a tutorial for miners to format their models on [Bittensor's](https://bittensor.com/) [SoundsRight Subnet](https://github.com/synapsec-ai/SoundsRightSubnet). The branches `DENOISING_16000HZ` and `DEREVERBERATION_16000HZ` contain SGMSE fitted with the appropriate checkpoints for denoising and dereverberation tasks at 16kHz, respectively.
This container has only been tested with **Ubuntu 24.04** and **CUDA 12.6**. It may run on other configurations, but it is not guaranteed.
To run the container, first configure NVIDIA Container Toolkit and generate a CDI specification. Follow the instructions to download the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) with Apt.
Next, follow the instructions for [generating a CDI specification](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/cdi-support.html).
Verify that the CDI specification was generated correctly with:
```
$ nvidia-ctk cdi list
```
You should see this in your output:
```
nvidia.com/gpu=all
nvidia.com/gpu=0
```
If you are running podman as root, build and run the container with:
```
podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --user root --name modelapi -p 6500:6500 modelapi
```
Access logs with:
```
podman logs -f modelapi
```
If you are running the container rootless, there are a few more changes to make:
First, modify `/etc/nvidia-container-runtime/config.toml` and set the following parameters:
```
[nvidia-container-cli]
no-cgroups = true
[nvidia-container-runtime]
debug = "/tmp/nvidia-container-runtime.log"
```
You can also run the following command to achieve the same result:
```
$ sudo nvidia-ctk config --set nvidia-container-cli.no-cgroups --in-place
```
Run the container with:
```
podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --volume /usr/local/cuda-12.6:/usr/local/cuda-12.6 --user 10002:10002 --name modelapi -p 6500:6500 modelapi
```
Access logs with:
```
podman logs -f modelapi
```
Running the container will spin up an API with the following endpoints:
1. `/status/` : Communicates API status
2. `/prepare/` : Download model checkpoint and initialize model
3. `/upload-audio/` : Upload audio files, save to noisy audio directory
4. `/enhance/` : Initialize model, enhance audio files, save to enhanced audio directory
5. `/download-enhanced/` : Download enhanced audio files
By default the API will use host `0.0.0.0` and port `6500`.
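For reference, here is a minimal sketch of exercising these endpoints with `curl` (the HTTP methods and the multipart field name are assumptions, not taken from the API code):
```
# Check API status
curl http://localhost:6500/status/
# Download the model checkpoint and initialize the model
curl -X POST http://localhost:6500/prepare/
# Upload a noisy audio file (field name "file" is an assumption)
curl -X POST -F "file=@noisy.wav" http://localhost:6500/upload-audio/
# Enhance the uploaded audio
curl -X POST http://localhost:6500/enhance/
# Download the enhanced audio
curl -O http://localhost:6500/download-enhanced/
```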
### References
1. **Welker, Simon; Richter, Julius; Gerkmann, Timo**
*Speech Enhancement with Score-Based Generative Models in the Complex STFT Domain*.
Proceedings of *Interspeech 2022*, 2022, pp. 2928–2932.
[DOI: 10.21437/Interspeech.2022-10653](https://doi.org/10.21437/Interspeech.2022-10653)
2. **Richter, Julius; Welker, Simon; Lemercier, Jean-Marie; Lay, Bunlong; Gerkmann, Timo**
*Speech Enhancement and Dereverberation with Diffusion-based Generative Models*.
*IEEE/ACM Transactions on Audio, Speech, and Language Processing*, Vol. 31, 2023, pp. 2351–2364.
[DOI: 10.1109/TASLP.2023.3285241](https://doi.org/10.1109/TASLP.2023.3285241)
3. **Richter, Julius; Wu, Yi-Chiao; Krenn, Steven; Welker, Simon; Lay, Bunlong; Watanabe, Shinji; Richard, Alexander; Gerkmann, Timo**
*EARS: An Anechoic Fullband Speech Dataset Benchmarked for Speech Enhancement and Dereverberation*.
Proceedings of *ISCA Interspeech*, 2024, pp. 4873–4877.
|
Kokoutou/soundsright_1908_2
|
Kokoutou
| 2025-08-19T03:30:42Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T03:25:48Z |
# Container Template for SoundsRight Subnet Miners
This repository contains a containerized version of [SGMSE+](https://huggingface.co/sp-uhh/speech-enhancement-sgmse) and serves as a tutorial for miners to format their models on [Bittensor's](https://bittensor.com/) [SoundsRight Subnet](https://github.com/synapsec-ai/SoundsRightSubnet). The branches `DENOISING_16000HZ` and `DEREVERBERATION_16000HZ` contain SGMSE fitted with the appropriate checkpoints for denoising and dereverberation tasks at 16kHz, respectively.
This container has only been tested with **Ubuntu 24.04** and **CUDA 12.6**. It may run on other configurations, but it is not guaranteed.
To run the container, first configure NVIDIA Container Toolkit and generate a CDI specification. Follow the instructions to download the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) with Apt.
Next, follow the instructions for [generating a CDI specification](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/cdi-support.html).
Verify that the CDI specification was generated correctly with:
```
$ nvidia-ctk cdi list
```
You should see this in your output:
```
nvidia.com/gpu=all
nvidia.com/gpu=0
```
If you are running podman as root, build and run the container with:
```
podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --user root --name modelapi -p 6500:6500 modelapi
```
Access logs with:
```
podman logs -f modelapi
```
If you are running the container rootless, there are a few more changes to make:
First, modify `/etc/nvidia-container-runtime/config.toml` and set the following parameters:
```
[nvidia-container-cli]
no-cgroups = true
[nvidia-container-runtime]
debug = "/tmp/nvidia-container-runtime.log"
```
You can also run the following command to achieve the same result:
```
$ sudo nvidia-ctk config --set nvidia-container-cli.no-cgroups --in-place
```
Run the container with:
```
podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --volume /usr/local/cuda-12.6:/usr/local/cuda-12.6 --user 10002:10002 --name modelapi -p 6500:6500 modelapi
```
Access logs with:
```
podman logs -f modelapi
```
Running the container will spin up an API with the following endpoints:
1. `/status/` : Communicates API status
2. `/prepare/` : Download model checkpoint and initialize model
3. `/upload-audio/` : Upload audio files, save to noisy audio directory
4. `/enhance/` : Initialize model, enhance audio files, save to enhanced audio directory
5. `/download-enhanced/` : Download enhanced audio files
By default the API will use host `0.0.0.0` and port `6500`.
### References
1. **Welker, Simon; Richter, Julius; Gerkmann, Timo**
*Speech Enhancement with Score-Based Generative Models in the Complex STFT Domain*.
Proceedings of *Interspeech 2022*, 2022, pp. 2928–2932.
[DOI: 10.21437/Interspeech.2022-10653](https://doi.org/10.21437/Interspeech.2022-10653)
2. **Richter, Julius; Welker, Simon; Lemercier, Jean-Marie; Lay, Bunlong; Gerkmann, Timo**
*Speech Enhancement and Dereverberation with Diffusion-based Generative Models*.
*IEEE/ACM Transactions on Audio, Speech, and Language Processing*, Vol. 31, 2023, pp. 2351–2364.
[DOI: 10.1109/TASLP.2023.3285241](https://doi.org/10.1109/TASLP.2023.3285241)
3. **Richter, Julius; Wu, Yi-Chiao; Krenn, Steven; Welker, Simon; Lay, Bunlong; Watanabe, Shinji; Richard, Alexander; Gerkmann, Timo**
*EARS: An Anechoic Fullband Speech Dataset Benchmarked for Speech Enhancement and Dereverberation*.
Proceedings of *ISCA Interspeech*, 2024, pp. 4873–4877.
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755572486
|
lisaozill03
| 2025-08-19T03:26:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T03:26:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lejonck/xlsr53-mupe-1
|
lejonck
| 2025-08-19T03:25:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-large-xlsr-53-portuguese",
"base_model:finetune:facebook/wav2vec2-large-xlsr-53-portuguese",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-19T03:25:20Z |
---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-large-xlsr-53-portuguese
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: xlsr53-mupe-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlsr53-mupe-1
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53-portuguese](https://huggingface.co/facebook/wav2vec2-large-xlsr-53-portuguese) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5001
- Wer: 0.5465
- Cer: 0.3049
## Model description
More information needed
## Intended uses & limitations
More information needed
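In the meantime, a minimal transcription sketch (an illustrative example, not from the original card; XLSR-53 checkpoints expect 16 kHz audio, which the pipeline resamples automatically):
```python
from transformers import pipeline

# Load the fine-tuned Portuguese checkpoint for CTC speech recognition
asr = pipeline("automatic-speech-recognition", model="lejonck/xlsr53-mupe-1")

# Transcribe a local audio file
print(asr("sample.wav")["text"])
```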
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 1.3239 | 1.0 | 2000 | 1.5103 | 0.6366 | 0.3503 |
| 1.156 | 2.0 | 4000 | 1.4288 | 0.6022 | 0.3261 |
| 0.958 | 3.0 | 6000 | 1.4058 | 0.5893 | 0.3214 |
| 1.2899 | 4.0 | 8000 | 1.4745 | 0.5743 | 0.3122 |
| 0.856 | 5.0 | 10000 | 1.4086 | 0.5684 | 0.3195 |
| 0.9923 | 6.0 | 12000 | 1.4499 | 0.5651 | 0.3086 |
| 0.9734 | 7.0 | 14000 | 1.4358 | 0.5579 | 0.3089 |
| 1.084 | 8.0 | 16000 | 1.5082 | 0.5507 | 0.3036 |
| 1.0326 | 9.0 | 18000 | 1.4677 | 0.5579 | 0.3064 |
| 1.229 | 10.0 | 20000 | 1.4917 | 0.5480 | 0.3056 |
| 0.785 | 11.0 | 22000 | 1.4971 | 0.5471 | 0.3050 |
| 0.6886 | 12.0 | 24000 | 1.5001 | 0.5465 | 0.3048 |
### Framework versions
- Transformers 4.55.2
- Pytorch 2.7.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755572283
|
hakimjustbao
| 2025-08-19T03:25:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging subtle wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T03:25:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging subtle wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
NexVeridian/Kimi-VL-A3B-Thinking-2506-4bit
|
NexVeridian
| 2025-08-19T03:24:52Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"kimi_vl",
"text-generation",
"conversational",
"custom_code",
"base_model:moonshotai/Kimi-VL-A3B-Thinking-2506",
"base_model:quantized:moonshotai/Kimi-VL-A3B-Thinking-2506",
"license:mit",
"4-bit",
"region:us"
] |
text-generation
| 2025-08-19T03:19:47Z |
---
base_model: moonshotai/Kimi-VL-A3B-Thinking-2506
license: mit
pipeline_tag: text-generation
library_name: mlx
tags:
- mlx
---
# NexVeridian/Kimi-VL-A3B-Thinking-2506-4bit
This model [NexVeridian/Kimi-VL-A3B-Thinking-2506-4bit](https://huggingface.co/NexVeridian/Kimi-VL-A3B-Thinking-2506-4bit) was
converted to MLX format from [moonshotai/Kimi-VL-A3B-Thinking-2506](https://huggingface.co/moonshotai/Kimi-VL-A3B-Thinking-2506)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("NexVeridian/Kimi-VL-A3B-Thinking-2506-4bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
Mostefa-Terbeche/diabetic-retinopathy-deepdrid-efficientnet_b3-original-20250721-141456
|
Mostefa-Terbeche
| 2025-08-19T03:24:13Z | 0 | 0 | null |
[
"diabetic-retinopathy",
"medical-imaging",
"pytorch",
"computer-vision",
"retinal-imaging",
"dataset:deepdrid",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2025-08-19T03:02:58Z |
---
license: apache-2.0
tags:
- diabetic-retinopathy
- medical-imaging
- pytorch
- computer-vision
- retinal-imaging
datasets:
- deepdrid
metrics:
- accuracy
- quadratic-kappa
- auc
model-index:
- name: deepdrid_efficientnet_b3_original
results:
- task:
type: image-classification
name: Diabetic Retinopathy Classification
dataset:
type: deepdrid
name: DEEPDRID
metrics:
- type: accuracy
value: 0.8375
- type: quadratic-kappa
value: 0.9402934152166497
---
# Diabetic Retinopathy Classification Model
## Model Description
This model is trained for diabetic retinopathy classification using the efficientnet_b3 architecture on the deepdrid dataset with original preprocessing.
## Model Details
- **Architecture**: efficientnet_b3
- **Dataset**: deepdrid
- **Preprocessing**: original
- **Training Date**: 20250721-141456
- **Task**: 5-class diabetic retinopathy grading (0-4)
- **Directory**: deepdrid_efficientnet_b3_20250721-141456_new
## Performance
- **Test Accuracy**: 0.8375
- **Test Quadratic Kappa**: 0.9402934152166497
- **Validation Kappa**: 0.9402934152166497
## Usage
```python
import torch
from huggingface_hub import hf_hub_download
# Download model
model_path = hf_hub_download(
repo_id="your-username/diabetic-retinopathy-deepdrid-efficientnet_b3-original",
filename="model_best.pt"
)
# Load model
model = torch.load(model_path, map_location='cpu')
```
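A hedged inference sketch follows, reusing `model` from above (the 300x300 input size and ImageNet normalization are assumptions based on common EfficientNet-B3 settings, and `model_best.pt` is assumed to contain a full serialized model rather than a state dict):
```python
import torch
from PIL import Image
from torchvision import transforms

# Assumed preprocessing; may differ from the actual training pipeline
preprocess = transforms.Compose([
    transforms.Resize((300, 300)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

batch = preprocess(Image.open("fundus.jpg").convert("RGB")).unsqueeze(0)

model.eval()
with torch.no_grad():
    grade = model(batch).argmax(dim=1).item()  # integer grade 0-4
print(f"Predicted DR grade: {grade}")
```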
## Classes
- 0: No DR (No diabetic retinopathy)
- 1: Mild DR (Mild non-proliferative diabetic retinopathy)
- 2: Moderate DR (Moderate non-proliferative diabetic retinopathy)
- 3: Severe DR (Severe non-proliferative diabetic retinopathy)
- 4: Proliferative DR (Proliferative diabetic retinopathy)
## Citation
If you use this model, please cite your research paper/thesis.
|
lqpl/blockassist-bc-hairy_insectivorous_antelope_1755573707
|
lqpl
| 2025-08-19T03:23:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hairy insectivorous antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T03:22:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hairy insectivorous antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
broinopio/blockassist-bc-monstrous_scampering_spider_1755571700
|
broinopio
| 2025-08-19T03:22:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"monstrous scampering spider",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T03:22:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- monstrous scampering spider
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Akshaykumarbm/OpenAssisted-English-Mistral-7b
|
Akshaykumarbm
| 2025-08-19T03:21:56Z | 31 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mistral-8b",
"openassistant",
"openassisted-english",
"language-modeling",
"conversational-ai",
"conversational",
"en",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-01T06:22:11Z |
---
library_name: transformers
tags:
- mistral-8b
- openassistant
- openassisted-english
- language-modeling
- text-generation
- conversational-ai
license: apache-2.0
language:
- en
base_model:
- mistralai/Mistral-7B-Instruct-v0.1
---
# Mistral-7B Instruction-Tuned on OpenAssisted-English
This model is a fine-tuned version of [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the [OpenAssisted-English](https://huggingface.co/datasets/OpenAssistant/oasst1) dataset using Hugging Face's `transformers` library. The model is optimized for high-quality conversational and instruction-following tasks in English.
---
## Model Details
### Model Description
This model is an instruction-tuned version of the Mistral-7B architecture, fine-tuned to follow human instructions and engage in helpful, safe, and factual conversations. It leverages the OpenAssisted-English dataset, a cleaned and filtered subset of OpenAssistant's OASST1 dataset.
* **Developed by:** Akshay Kumar BM
* **Fine-tuned using:** Hugging Face Transformers
* **Dataset used:** OpenAssisted-English (from OpenAssistant)
* **Model type:** Decoder-only Transformer
* **Language(s):** English
* **License:** Apache 2.0
* **Finetuned from model:** mistralai/Mistral-7B-v0.1
---
## Model Sources
* **Base Model:** [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* **Dataset:** [OpenAssisted-English](https://huggingface.co/datasets/OpenAssistant/oasst1)
* **Library:** Hugging Face Transformers
* **Frameworks:** PyTorch, Accelerate
---
## Uses
### Direct Use
* Conversational AI
* Instruction-following agents
* Text completion and generation
* Chatbot backends
* Question answering
### Downstream Use
* Fine-tuning for specific domains (e.g., legal, medical, education)
* Integration into multi-agent systems or RAG pipelines
* Prompt engineering and prototyping
### Out-of-Scope Use
* Use in high-risk environments (e.g., medical diagnosis, legal decision making) without human oversight.
* Generating misinformation, harmful, offensive, or biased content.
* Any use violating Hugging Face’s or Apache 2.0 licensing terms.
---
## Bias, Risks, and Limitations
Despite being fine-tuned for alignment, the model may:
* Hallucinate facts.
* Reflect biases present in the OpenAssistant dataset.
* Respond unpredictably to adversarial or ambiguous prompts.
### Recommendations
* Always include a human-in-the-loop for sensitive applications.
* Evaluate in domain-specific scenarios before deployment.
* Apply additional safety filters for production use.
---
## How to Get Started
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "Akshaykumarbm/OpenAssisted-English-Mistral-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
input_prompt = "Explain quantum computing in simple terms."
inputs = tokenizer(input_prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
---
## Training Details
### Training Data
The model was trained on the **OpenAssisted-English** dataset, which includes high-quality, human-annotated instruction-response pairs derived from OpenAssistant’s OASST1 dataset.
* Format: Instruction + Response
* Filters: Language = English, Quality ≥ 3, Assistant messages only
* Size: \~100k samples
### Training Procedure
#### Preprocessing
* Tokenization: BPE tokenizer from Mistral
* Truncation: 4096 tokens
* Format: `<s>[INST] prompt [/INST] response</s>`
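For illustration, a minimal sketch of reproducing this format at inference time, reusing `tokenizer` and `model` from the snippet above (hand-rolled rather than via a chat template; `<s>` is omitted because the tokenizer prepends the BOS token):
```python
prompt = "Explain quantum computing in simple terms."
formatted = f"[INST] {prompt} [/INST]"  # <s> added by the tokenizer as BOS

inputs = tokenizer(formatted, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```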
#### Hyperparameters
* **Precision:** bf16 mixed precision
* **Batch size:** 512 (global)
* **Epochs:** 15
* **Optimizer:** AdamW
* **LR Scheduler:** CosineDecay
* **Learning rate:** 2e-5
* **Warmup steps:** 500
#### Compute
* **Hardware:** AMD MI300
* **Training time:** \~18 hours
* **Frameworks:** PyTorch + Accelerate + DDP
---
## Evaluation
### Testing Data
* Held-out subset from OpenAssisted-English
* Manual eval for coherence, helpfulness, and safety
* Evaluation on MT-Bench and AlpacaEval (optional)
### Metrics
* **Helpfulness Score** (manual): \~7.2/10
* **Toxicity (Perspective API):** <1%
* **BLEU, ROUGE:** Used to compare with gold responses
---
## Technical Specifications
* **Architecture:** Mistral 7B (decoder-only transformer)
* **Tokenizer:** Mistral Tokenizer (32k vocab)
* **Context Length:** 8k tokens
* **Parameters:** \~7.2 billion
---
## Citation
If you use this model, please cite the original Mistral model and OpenAssistant dataset.
```bibtex
@misc{mistral2023,
title={Mistral 7B},
author={Mistral AI},
year={2023},
url={https://mistral.ai/news/announcing-mistral-7b/}
}
@misc{openassistant2023,
title = {OpenAssistant Conversations - OASST1},
author = {OpenAssistant Contributors},
year = {2023},
url = {https://huggingface.co/datasets/OpenAssistant/oasst1}
}
```
---
## Contact
* **Author:** Akshay Kumar BM
* **Email:** [[email protected]](mailto:[email protected])
* **GitHub:** [akshaykumarbedre](https://github.com/akshaykumarbedre)
* **Hugging Face:** [akshaykumarbm](https://huggingface.co/akshaykumarbm)
---
|