| Column | Type | Range / Stats |
|---|---|---|
| modelId | string | lengths 5–139 |
| author | string | lengths 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-08-12 00:41:32 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 497 classes |
| tags | list | lengths 1 – 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-08-12 00:39:08 |
| card | string | lengths 11 – 1.01M |
pocohos/paraphrase-multilingual-mpnet-base-v2-Q6_K-GGUF
|
pocohos
| 2025-08-11T20:44:31Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"gguf",
"feature-extraction",
"sentence-similarity",
"transformers",
"llama-cpp",
"gguf-my-repo",
"multilingual",
"ar",
"bg",
"ca",
"cs",
"da",
"de",
"el",
"en",
"es",
"et",
"fa",
"fi",
"fr",
"gl",
"gu",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"it",
"ja",
"ka",
"ko",
"ku",
"lt",
"lv",
"mk",
"mn",
"mr",
"ms",
"my",
"nb",
"nl",
"pl",
"pt",
"ro",
"ru",
"sk",
"sl",
"sq",
"sr",
"sv",
"th",
"tr",
"uk",
"ur",
"vi",
"base_model:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"base_model:quantized:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-08-11T20:44:25Z |
---
language:
- multilingual
- ar
- bg
- ca
- cs
- da
- de
- el
- en
- es
- et
- fa
- fi
- fr
- gl
- gu
- he
- hi
- hr
- hu
- hy
- id
- it
- ja
- ka
- ko
- ku
- lt
- lv
- mk
- mn
- mr
- ms
- my
- nb
- nl
- pl
- pt
- ro
- ru
- sk
- sl
- sq
- sr
- sv
- th
- tr
- uk
- ur
- vi
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- llama-cpp
- gguf-my-repo
language_bcp47:
- fr-ca
- pt-br
- zh-cn
- zh-tw
pipeline_tag: sentence-similarity
base_model: sentence-transformers/paraphrase-multilingual-mpnet-base-v2
---
# pocohos/paraphrase-multilingual-mpnet-base-v2-Q6_K-GGUF
This model was converted to GGUF format from [`sentence-transformers/paraphrase-multilingual-mpnet-base-v2`](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) for more details on the model.
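As a quick sanity check against the original model, here is a minimal sketch of encoding sentences with the unquantized checkpoint via the `sentence-transformers` library (assuming it is installed; this is separate from the GGUF workflow below):
```python
from sentence_transformers import SentenceTransformer

# Load the original (non-GGUF) checkpoint this repo was converted from
model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-mpnet-base-v2")

sentences = ["This is an example.", "Das ist ein Beispiel."]
embeddings = model.encode(sentences)
print(embeddings.shape)  # (2, 768)
```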
## Use with llama.cpp
Install llama.cpp via Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo pocohos/paraphrase-multilingual-mpnet-base-v2-Q6_K-GGUF --hf-file paraphrase-multilingual-mpnet-base-v2-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo pocohos/paraphrase-multilingual-mpnet-base-v2-Q6_K-GGUF --hf-file paraphrase-multilingual-mpnet-base-v2-q6_k.gguf -c 2048
```
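Since this checkpoint is an embedding model rather than a chat model, you will likely want llama.cpp's embedding mode instead of plain text completion. A sketch (the flag is spelled `--embedding` or `--embeddings` depending on your llama.cpp version):
```bash
llama-server --hf-repo pocohos/paraphrase-multilingual-mpnet-base-v2-Q6_K-GGUF --hf-file paraphrase-multilingual-mpnet-base-v2-q6_k.gguf --embeddings -c 2048
```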
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo pocohos/paraphrase-multilingual-mpnet-base-v2-Q6_K-GGUF --hf-file paraphrase-multilingual-mpnet-base-v2-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo pocohos/paraphrase-multilingual-mpnet-base-v2-Q6_K-GGUF --hf-file paraphrase-multilingual-mpnet-base-v2-q6_k.gguf -c 2048
```
|
giovannidemuri/llama8b-er-afg-v77-seed2-hx
|
giovannidemuri
| 2025-08-11T20:42:19Z | 1 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-09T22:35:56Z |
---
library_name: transformers
license: llama3.1
base_model: meta-llama/Llama-3.1-8B
tags:
- generated_from_trainer
model-index:
- name: llama8b-er-afg-v77-seed2-hx
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama8b-er-afg-v77-seed2-hx
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 2
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
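For readers who want to replicate this configuration, a minimal sketch of the equivalent `transformers` `TrainingArguments` follows; only the hyperparameters listed above come from this card, while the output directory and all other settings are assumptions:
```python
from transformers import TrainingArguments

# Values mirrored from the hyperparameter list above; everything else is a default/assumption
args = TrainingArguments(
    output_dir="llama8b-er-afg-v77-seed2-hx",  # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=2,
    optim="adamw_torch",          # AdamW with betas=(0.9, 0.999), epsilon=1e-08
    lr_scheduler_type="linear",
    warmup_ratio=0.03,
    num_train_epochs=2,
)
```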
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu128
- Datasets 3.6.0
- Tokenizers 0.21.2
|
annahbanannah/annah_sft-000
|
annahbanannah
| 2025-08-11T18:31:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T19:39:22Z |
---
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
library_name: transformers
model_name: annah_sft-000
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for annah_sft-000
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="annahbanannah/annah_sft-000", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/farai/grpo_bench/runs/fc1a8f2p)
This model was trained with SFT.
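For illustration, a minimal sketch of a TRL SFT run of this shape; the dataset below is a placeholder from the TRL documentation, not the data actually used for this model:
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset: the training data for annah_sft-000 is not documented in this card
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    train_dataset=dataset,
    args=SFTConfig(output_dir="annah_sft-000"),
)
trainer.train()
```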
### Framework versions
- TRL: 0.20.0
- Transformers: 4.54.1
- Pytorch: 2.7.1+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
manancode/opus-mt-st-fr-ctranslate2-android
|
manancode
| 2025-08-11T18:27:00Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-11T18:26:46Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-st-fr-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-st-fr` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-st-fr
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Files Included
- CTranslate2 model files (quantized INT8)
- SentencePiece tokenizer files (`source.spm`, `target.spm`)
- Integration guide for Android deployment
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
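For reference, CTranslate2 conversions like this one are typically produced with the converter CLI that ships with the `ctranslate2` package; a sketch, assuming `transformers` is also installed:
```bash
# Convert the original Transformers checkpoint to CTranslate2 with INT8 weights
ct2-transformers-converter --model Helsinki-NLP/opus-mt-st-fr \
    --output_dir opus-mt-st-fr-ctranslate2 --quantization int8
```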
### Android Integration
See the included `INTEGRATION_GUIDE.txt` for Android implementation details.
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
leotod/hf-course-code-search-net-tokenizer
|
leotod
| 2025-08-11T18:25:28Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T18:25:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
manancode/opus-mt-srn-fr-ctranslate2-android
|
manancode
| 2025-08-11T18:24:40Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-11T18:24:27Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-srn-fr-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-srn-fr` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-srn-fr
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Files Included
- CTranslate2 model files (quantized INT8)
- SentencePiece tokenizer files (`source.spm`, `target.spm`)
- Integration guide for Android deployment
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
### Android Integration
See the included `INTEGRATION_GUIDE.txt` for Android implementation details.
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754935865
|
ggozzy
| 2025-08-11T18:12:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T18:12:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the method introduced in the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Lava8888/CounterToSinkLast
|
Lava8888
| 2025-08-11T18:11:43Z | 0 | 0 | null |
[
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | 2025-08-11T16:54:30Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rambetiko/blockassist-bc-soft_lanky_marmot_1754934235
|
rambetiko
| 2025-08-11T17:50:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"soft lanky marmot",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T17:50:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- soft lanky marmot
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the method introduced in the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hejazizo/grpo-Qwen3-1.7B_2025-08-04_15-43
|
hejazizo
| 2025-08-11T17:48:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"grpo",
"trl",
"arxiv:2402.03300",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",
"endpoints_compatible",
"region:us"
] | null | 2025-08-04T19:43:55Z |
---
base_model: Qwen/Qwen3-1.7B
library_name: transformers
model_name: grpo-Qwen3-1.7B_2025-08-04_15-43
tags:
- generated_from_trainer
- grpo
- trl
licence: license
---
# Model Card for grpo-Qwen3-1.7B_2025-08-04_15-43
This model is a fine-tuned version of [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="hejazizo/grpo-Qwen3-1.7B_2025-08-04_15-43", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/hejazizo-ali-pytopia/grpo-Qwen3-1.7B/runs/8hg0f34u)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
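For illustration, a minimal sketch of a TRL GRPO setup; the dataset and reward function below are placeholders adapted from the TRL documentation, not the ones used for this run:
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Placeholder dataset and toy reward that prefers completions near 20 characters
dataset = load_dataset("trl-lib/tldr", split="train")

def reward_len(completions, **kwargs):
    return [-abs(20 - len(completion)) for completion in completions]

trainer = GRPOTrainer(
    model="Qwen/Qwen3-1.7B",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="grpo-Qwen3-1.7B"),
    train_dataset=dataset,
)
trainer.train()
```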
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
saung869/shadowblade
|
saung869
| 2025-08-11T17:43:19Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"region:us"
] | null | 2025-08-11T17:41:57Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RMCian/blockassist-bc-wiry_sturdy_cobra_1754933995
|
RMCian
| 2025-08-11T17:40:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry sturdy cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T17:40:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry sturdy cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the method introduced in the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
crystalline7/1857296
|
crystalline7
| 2025-08-11T17:30:42Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-11T17:30:36Z |
[View on Civ Archive](https://civitaiarchive.com/models/1731524?modelVersionId=1959678)
|
aspalj/blockassist-bc-sniffing_regal_salmon_1754932706
|
aspalj
| 2025-08-11T17:29:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sniffing regal salmon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T17:29:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sniffing regal salmon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the method introduced in the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bcywinski/qwen3-1.7b-taboo-smile
|
bcywinski
| 2025-08-11T17:20:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T17:11:39Z |
---
base_model: Qwen/Qwen3-1.7B
library_name: transformers
model_name: qwen3-1.7b-taboo-smile
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for qwen3-1.7b-taboo-smile
This model is a fine-tuned version of [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="bcywinski/qwen3-1.7b-taboo-smile", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/barto/qwen3-1.7b-taboo/runs/xfzp0o9y)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
liminerity/MoR-TC-v2.1-2-ties
|
liminerity
| 2025-08-11T17:12:45Z | 0 | 0 | null |
[
"safetensors",
"MoR",
"region:us"
] | null | 2025-08-11T04:30:01Z |
# MoR-TC-v2.1-2-ties
This model is a merge of `liminerity/MoR-TC-v2.1` and `liminerity/MoR-TC-v2.1-2`.
`liminerity/MoR-TC-v2.1` was trained on the first half of `cognitivecomputations/dolphin`, and `liminerity/MoR-TC-v2.1-2` was trained on the second half.
The idea was to save time and money by training each model on only part of the data, then merging.
The following code can be used to run inference with this model:
```python
import json
import torch
import torch.nn as nn
import torch.nn.functional as F
import math
from transformers import GPT2Tokenizer
from safetensors.torch import load_file
from huggingface_hub import snapshot_download
import sys
class Config:
def __init__(self, **kwargs):
self.vocab_size = 50257
self.d_model = 1024
self.n_head = 16
self.d_k = self.d_model // self.n_head
self.d_ff = 4096
self.max_depth = 4
self.num_recursive_layers = 6
self.balancing_weight = 0.01
self.temperature = 1.0
self.seq_len = 512
self.batch_size = 16
self.window_size = 2048
self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
for key, value in kwargs.items():
setattr(self, key, value)
if hasattr(self, 'd_model') and hasattr(self, 'n_head'):
self.d_k = self.d_model // self.n_head
class RecursiveLayer(nn.Module):
def __init__(self, config):
super().__init__()
self.config = config
self.w_q = nn.Linear(config.d_model, config.d_model)
self.w_k = nn.Linear(config.d_model, config.d_model)
self.w_v = nn.Linear(config.d_model, config.d_model)
self.attn_out = nn.Linear(config.d_model, config.d_model)
self.ffn = nn.Sequential(
nn.Linear(config.d_model, config.d_ff),
nn.GELU(),
nn.Linear(config.d_ff, config.d_model)
)
self.norm1 = nn.LayerNorm(config.d_model)
self.norm2 = nn.LayerNorm(config.d_model)
def forward(self, h, active_mask):
batch_size, seq_len, _ = h.shape
# Project current hidden state for Q, K, V
q = self.w_q(h).view(batch_size, seq_len, self.config.n_head, self.config.d_k)
k = self.w_k(h).view(batch_size, seq_len, self.config.n_head, self.config.d_k)
v = self.w_v(h).view(batch_size, seq_len, self.config.n_head, self.config.d_k)
q = q.permute(0, 2, 1, 3) # [batch, head, seq, d_k]
k = k.permute(0, 2, 1, 3) # [batch, head, seq, d_k]
v = v.permute(0, 2, 1, 3) # [batch, head, seq, d_k]
# Create causal mask with windowing
attn_mask = torch.ones(seq_len, seq_len, device=h.device, dtype=torch.bool)
attn_mask = torch.tril(attn_mask, diagonal=0) # Causal lower triangle
attn_mask = torch.triu(attn_mask, diagonal=-self.config.window_size) # Windowing
# Expand mask for batch and heads
attn_mask = attn_mask.view(1, 1, seq_len, seq_len)
# Compute attention scores
attn_scores = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(self.config.d_k)
attn_scores = attn_scores.masked_fill(~attn_mask, float('-inf'))
attn_probs = F.softmax(attn_scores, dim=-1)
# Apply attention
attn_out = torch.matmul(attn_probs, v)
attn_out = attn_out.permute(0, 2, 1, 3).contiguous()
attn_out = attn_out.view(batch_size, seq_len, self.config.d_model)
attn_out = self.attn_out(attn_out)
# Apply active mask
active_mask_expanded = active_mask.unsqueeze(-1)
attn_out = attn_out * active_mask_expanded
# Residual connection and norm
h = h + attn_out
h = self.norm1(h)
# FFN
ffn_out = self.ffn(h) * active_mask_expanded
h = h + ffn_out
h = self.norm2(h)
return h
class Router(nn.Module):
def __init__(self, config):
super().__init__()
self.linear = nn.Sequential(
nn.Linear(config.d_model, config.d_model // 2),
nn.GELU(),
nn.Linear(config.d_model // 2, config.max_depth)
)
self.temperature = config.temperature
def forward(self, h, train=True):
logits = self.linear(h)
if train:
probs = F.gumbel_softmax(logits, tau=self.temperature, dim=-1)
return probs, F.softmax(logits, dim=-1)
else:
probs = F.softmax(logits, dim=-1)
return probs, probs
class MixtureRecursions(nn.Module):
def __init__(self, config):
super().__init__()
self.config = config
self.embed = nn.Embedding(config.vocab_size, config.d_model)
self.pos_embed = nn.Embedding(config.seq_len, config.d_model)
self.first_layer = nn.Sequential(
nn.Linear(config.d_model, config.d_model),
nn.GELU(),
nn.LayerNorm(config.d_model)
)
self.recursive_layers = nn.ModuleList([
RecursiveLayer(config) for _ in range(config.num_recursive_layers)
])
self.router = Router(config)
self.final_norm = nn.LayerNorm(config.d_model)
self.head = nn.Linear(config.d_model, config.vocab_size, bias=False)
self.apply(self._init_weights)
def _init_weights(self, module):
if isinstance(module, nn.Linear):
nn.init.normal_(module.weight, mean=0.0, std=0.02)
if module.bias is not None:
nn.init.zeros_(module.bias)
elif isinstance(module, nn.Embedding):
nn.init.normal_(module.weight, mean=0.0, std=0.02)
def forward(self, x, targets=None):
device = x.device
batch_size, seq_len = x.shape
pos_ids = torch.arange(0, seq_len, dtype=torch.long, device=device)
pos_emb = self.pos_embed(pos_ids)
tok_emb = self.embed(x)
h = tok_emb + pos_emb
h = self.first_layer(h)
# Get router assignments
router_probs, router_soft = self.router(h)
assigned_depths = router_probs.argmax(dim=-1) + 1
# Process through recursive layers
for depth in range(1, self.config.max_depth + 1):
active_mask = (assigned_depths >= depth)
layer_idx = (depth - 1) % self.config.num_recursive_layers
h = self.recursive_layers[layer_idx](h, active_mask)
h = self.final_norm(h)
logits = self.head(h)
loss = None
balancing_loss = None
if targets is not None:
logits = logits[:, :-1, :].contiguous()
targets = targets[:, 1:].contiguous()
loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1))
# Balancing loss
router_decision = router_probs.sum(dim=[0, 1])
router_decision = router_decision / (batch_size * seq_len)
balancing_loss = torch.var(router_decision) * self.config.balancing_weight
return logits, loss, balancing_loss
return logits, loss, balancing_loss
# --- Download and load the model, tokenizer, and config ---
repo_id = "liminerity/MoR-TC-v2"
model_dir = snapshot_download(repo_id=repo_id)
tokenizer = GPT2Tokenizer.from_pretrained(model_dir)
with open(f"{model_dir}/config.json", 'r') as f:
hf_config = json.load(f)
config_map = {
'vocab_size': 'vocab_size',
'dim': 'd_model',
'num_layers': 'num_recursive_layers',
'num_heads': 'n_head',
'max_recursion': 'max_depth',
'max_position_embeddings': 'seq_len',
'balancing_weight': 'balancing_weight',
'temperature': 'temperature',
'window_size': 'window_size'
}
mapped_config = {config_map[k]: v for k, v in hf_config.items() if k in config_map}
mapped_config['d_ff'] = hf_config['ffn_expansion'] * mapped_config['d_model']
config = Config(**mapped_config)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = MixtureRecursions(config).to(device)
weights = load_file(f"{model_dir}/model.safetensors", device=str(device))
model.load_state_dict(weights)
model.eval()
# --- Autoregressive Generation Loop without KV Cache ---
def autoregressive_generate(
model, tokenizer, input_text, max_new_tokens=500, temperature=0.3, line_width=71):
model.eval()
device = next(model.parameters()).device
input_ids = tokenizer.encode(input_text, return_tensors="pt").to(device)
current_ids = input_ids
generated_text = input_text
# Print initial text without newline
print(input_text, end="", flush=True)
for _ in range(max_new_tokens):
# Truncate if sequence gets too long
if current_ids.shape[1] >= config.seq_len:
current_ids = current_ids[:, -config.seq_len:]
with torch.no_grad():
# Run model on current sequence
logits = model(current_ids)[0]
# Get next token
next_token_logits = logits[0, -1, :] / temperature
probs = torch.softmax(next_token_logits, dim=-1)
next_token_id = torch.multinomial(probs, num_samples=1).item()
# Append new token
current_ids = torch.cat(
[current_ids, torch.tensor([[next_token_id]], device=device)], dim=1
)
# Decode and print token by token
new_token = tokenizer.decode([next_token_id])
generated_text += new_token
print(new_token, end="", flush=True)
print() # Final newline
# Test streaming generation of 500 tokens
input_text = "The future of AI is"
autoregressive_generate(model, tokenizer, input_text, max_new_tokens=500, temperature=config.temperature)
```
The following code was used to merge the two models with the TIES method:
```python
# Install required libraries
#!pip install transformers huggingface-hub safetensors
import torch
from huggingface_hub import snapshot_download, HfApi
from safetensors.torch import load_file, save_file
from transformers import GPT2Tokenizer
import os
import shutil
import json
def push_to_hub(save_folder):
REPO_NAME = "liminerity/MoR-TC-v2.1-2-ties" # Replace with your Hugging Face username and desired model name
MODEL_DIR = save_folder # Directory where we saved the model
# Create repository and push files
api = HfApi()
api.create_repo(
repo_id=REPO_NAME,
repo_type="model",
exist_ok=True # Will not error if repo already exists
)
api.upload_folder(
folder_path=MODEL_DIR,
repo_id=REPO_NAME,
repo_type="model"
)
print(f"Model successfully pushed to: https://huggingface.co/{REPO_NAME}")
# Configuration
SPARSITY = 0.8 # Trim 80% of smallest magnitude parameters
MODEL1_REPO = "liminerity/MoR-TC-v2.1-2"
MODEL2_REPO = "liminerity/MoR-TC-v2.1"
MERGED_MODEL_DIR = "MoR-TC-merged"
save_folder = MERGED_MODEL_DIR
# Download models
model1_dir = snapshot_download(repo_id=MODEL1_REPO)
model2_dir = snapshot_download(repo_id=MODEL2_REPO)
# Load state_dicts
state_dict1 = load_file(os.path.join(model1_dir, "model.safetensors"))
state_dict2 = load_file(os.path.join(model2_dir, "model.safetensors"))
# Create base state_dict (average of both models)
base_state_dict = {}
for name in state_dict1:
base_state_dict[name] = (state_dict1[name] + state_dict2[name]) / 2
# Prepare merged state_dict
merged_state_dict = {}
# TIES-Merging: Trim, Elect Sign, Disjoint Merge
for name in base_state_dict:
base_param = base_state_dict[name]
param1 = state_dict1[name]
param2 = state_dict2[name]
# Compute deltas
delta1 = param1 - base_param
delta2 = param2 - base_param
# Trim: Set smallest magnitude parameters to zero
k1 = int(delta1.numel() * SPARSITY)
k2 = int(delta2.numel() * SPARSITY)
if k1 > 0:
flat_d1 = delta1.view(-1)
_, indices = torch.topk(flat_d1.abs(), k1, largest=False)
flat_d1[indices] = 0
if k2 > 0:
flat_d2 = delta2.view(-1)
_, indices = torch.topk(flat_d2.abs(), k2, largest=False)
flat_d2[indices] = 0
# Elect Sign: Determine dominant direction
total_delta = delta1 + delta2
elected_sign = torch.sign(total_delta)
# Nullify conflicting updates
mask1 = (delta1 != 0) & (torch.sign(delta1) != elected_sign)
delta1[mask1] = 0
mask2 = (delta2 != 0) & (torch.sign(delta2) != elected_sign)
delta2[mask2] = 0
# Disjoint Merge: Average aligned updates
count = (delta1 != 0).float() + (delta2 != 0).float()
merged_delta = (delta1 + delta2) / torch.clamp(count, min=1.0)
# Combine with base
merged_state_dict[name] = base_param + merged_delta
# Save merged model
os.makedirs(MERGED_MODEL_DIR, exist_ok=True)
save_file(merged_state_dict, os.path.join(MERGED_MODEL_DIR, "model.safetensors"))
# Copy config from model1
shutil.copy(os.path.join(model1_dir, "config.json"),
os.path.join(MERGED_MODEL_DIR, "config.json"))
# Save tokenizer from model1
tokenizer = GPT2Tokenizer.from_pretrained(model1_dir)
tokenizer.save_pretrained(MERGED_MODEL_DIR)
print(f"Merged model saved to: {MERGED_MODEL_DIR}")
push_to_hub(save_folder)
```
|
giovannidemuri/llama3b-llamab8-er-afg-v14-seed2-french-alpaca-fpt
|
giovannidemuri
| 2025-08-11T17:09:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.2-3B",
"base_model:finetune:meta-llama/Llama-3.2-3B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T15:58:43Z |
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-3B
tags:
- generated_from_trainer
model-index:
- name: llama3b-llamab8-er-afg-v14-seed2-french-alpaca-fpt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3b-llamab8-er-afg-v14-seed2-french-alpaca-fpt
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 2
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu128
- Datasets 3.6.0
- Tokenizers 0.21.2
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754930634
|
ggozzy
| 2025-08-11T16:45:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T16:44:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the method introduced in the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
drewskidang/legal-modernbert-embedding
|
drewskidang
| 2025-08-11T16:42:50Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"modernbert",
"legal",
"embedding",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-08-11T16:42:46Z |
---
tags:
- sentence-transformers
- legal
- embedding
- modernbert
library_name: sentence-transformers
pipeline_tag: sentence-similarity
---
# Legal ModernBERT Embedding Model
This is a fine-tuned embedding model based on ModernBERT, specifically trained on legal document triplets for legal document similarity and retrieval tasks.
## Model Details
- **Base Model**: answerdotai/ModernBERT-base
- **Training Data**: Legal document triplets (cleaned)
- **Training Samples**: ~4,886 legal triplets
- **Evaluation Samples**: ~1,222 legal triplets
- **Fine-tuning Framework**: SentenceTransformers
- **Loss Function**: CachedMultipleNegativesRankingLoss
## Usage
```python
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer('drewskidang/legal-modernbert-embedding')
# Encode legal documents
legal_texts = [
"This contract establishes the terms of employment...",
"The defendant violated the terms of the agreement...",
"Patent application for a novel invention..."
]
embeddings = model.encode(legal_texts)
print(embeddings.shape)
```
## Training Details
- **Learning Rate**: 2e-5
- **Batch Size**: 16-32
- **Epochs**: 3
- **Max Sequence Length**: 2048-4096 tokens
- **GPU**: Modal A100 GPUs
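For context, a minimal sketch of how a triplet fine-tune with this loss is typically set up in SentenceTransformers; the triplet texts are placeholders, and any setting not listed above is an assumption:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("answerdotai/ModernBERT-base")

# Placeholder triplets: (anchor, positive, negative)
train_examples = [
    InputExample(texts=[
        "This contract establishes the terms of employment...",
        "The employment agreement sets out the parties' obligations...",
        "Patent application for a novel invention...",
    ]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)

# Loss named in this card; caching lets large effective batches fit in memory
train_loss = losses.CachedMultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=3)
```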
## Legal Use Cases
This model is optimized for:
- Legal document similarity
- Case law retrieval
- Contract analysis
- Legal research and discovery
- Regulatory document search
## Performance
The model achieved strong convergence with a final training loss of ~0.24, indicating effective learning of legal document representations.
## Citation
If you use this model, please cite:
```bibtex
@misc{legal-modernbert-embedding,
title={Legal ModernBERT Embedding Model},
author={Andrew Dang},
year={2025},
url={https://huggingface.co/drewskidang/legal-modernbert-embedding}
}
```
|
WenFengg/swing27_14_31_4
|
WenFengg
| 2025-08-11T16:38:01Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-07-31T09:49:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754928982
|
ggozzy
| 2025-08-11T16:17:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T16:17:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the method introduced in the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
daslab-testing/Qwen3-1.7B-FPQuant-QAT-NVFP4-200steps
|
daslab-testing
| 2025-08-11T16:17:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"fp_quant",
"region:us"
] |
text-generation
| 2025-08-11T16:16:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
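No official snippet is provided; the following is a minimal sketch, assuming the checkpoint loads through the standard transformers text-generation API. The repo's `fp_quant`/8-bit tags suggest quantized weights, which may require additional packages at load time.
```python
# Minimal sketch (assumption: standard transformers loading works for this
# FP-Quant checkpoint; extra quantization packages may be required).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "daslab-testing/Qwen3-1.7B-FPQuant-QAT-NVFP4-200steps"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```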
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Jovar1/blockassist-bc-bold_hulking_rooster_1754928729
|
Jovar1
| 2025-08-11T16:13:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bold hulking rooster",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T16:13:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bold hulking rooster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
VIDEO-19-sajal-malik-Viral-Video/XX.FULL.VIDEO.jobz.hunting.sajal.malik.Viral.Video.Tutorial.Official
|
VIDEO-19-sajal-malik-Viral-Video
| 2025-08-11T16:01:56Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-11T16:01:47Z |
<a href="https://sdu.sk/Kyl"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="fsd" /></a>
<a href="https://sdu.sk/Kyl" rel="nofollow">►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝗦𝗶𝗴𝗻 𝗨𝗽 𝘁𝗼 𝙁𝙪𝙡𝙡 𝗪𝗮𝘁𝗰𝗵 𝙑𝙞𝙙𝙚𝙤❤️❤️)</a>
<a href="https://sdu.sk/Kyl" rel="nofollow">🔴 ➤►✅𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐥𝐢𝐧𝐤)</a>
|
New-Clip-Lil-tay-viral-video-Link-on/Exclusive.Orginal.full.Videos.Lil.tay.Lil.tay.viral.video.Official.Tutorial
|
New-Clip-Lil-tay-viral-video-Link-on
| 2025-08-11T15:57:50Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-11T15:57:40Z |
<a href="https://sdu.sk/Kyl"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="fsd" /></a>
<a href="https://sdu.sk/Kyl" rel="nofollow">►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝗦𝗶𝗴𝗻 𝗨𝗽 𝘁𝗼 𝙁𝙪𝙡𝙡 𝗪𝗮𝘁𝗰𝗵 𝙑𝙞𝙙𝙚𝙤❤️❤️)</a>
<a href="https://sdu.sk/Kyl" rel="nofollow">🔴 ➤►✅𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐥𝐢𝐧𝐤)</a>
|
mamann/blockassist-bc-screeching_agile_coral_1754925809
|
mamann
| 2025-08-11T15:56:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"screeching agile coral",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T15:56:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- screeching agile coral
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
KristineBqn/llama3.1-8B_Parent-llama3.1-70B_merged16bit_epoch1
|
KristineBqn
| 2025-08-11T15:56:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Llama-3.1-8B",
"base_model:finetune:unsloth/Llama-3.1-8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T15:49:30Z |
---
base_model: unsloth/Llama-3.1-8B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** KristineBqn
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Llama-3.1-8B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
VIDEOS-18-dr-eman-and-arooj-viral-video/New.full.videos.dr.eman.and.arooj.Viral.Video.Official.Tutorial
|
VIDEOS-18-dr-eman-and-arooj-viral-video
| 2025-08-11T15:50:19Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-11T15:50:14Z |
<a href="https://sdu.sk/Kyl"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="fsd" /></a>
<a href="https://sdu.sk/Kyl" rel="nofollow">►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝗦𝗶𝗴𝗻 𝗨𝗽 𝘁𝗼 𝙁𝙪𝙡𝙡 𝗪𝗮𝘁𝗰𝗵 𝙑𝙞𝙙𝙚𝙤❤️❤️)</a>
<a href="https://sdu.sk/Kyl" rel="nofollow">🔴 ➤►✅𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐥𝐢𝐧𝐤)</a>
|
yujiepan/glm-4.5v-tiny-random
|
yujiepan
| 2025-08-11T15:42:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"glm4v_moe",
"image-text-to-text",
"conversational",
"base_model:zai-org/GLM-4.5V",
"base_model:finetune:zai-org/GLM-4.5V",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-08-11T15:42:18Z |
---
library_name: transformers
pipeline_tag: image-text-to-text
inference: true
widget:
- text: Hello!
example_title: Hello world
group: Python
base_model:
- zai-org/GLM-4.5V
---
This tiny model is for debugging. It is randomly initialized, using a config adapted from [zai-org/GLM-4.5V](https://huggingface.co/zai-org/GLM-4.5V).
### Example usage:
```python
import torch
from transformers import AutoProcessor, Glm4vMoeForConditionalGeneration
model_id = "yujiepan/glm-4.5v-tiny-random"
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"url": "https://upload.wikimedia.org/wikipedia/commons/f/fa/Grayscale_8bits_palette_sample_image.png"
},
{
"type": "text",
"text": "describe this image"
}
],
}
]
processor = AutoProcessor.from_pretrained(model_id)
model = Glm4vMoeForConditionalGeneration.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
inputs = processor.apply_chat_template(
messages,
tokenize=True,
add_generation_prompt=True,
return_dict=True,
return_tensors="pt"
).to(model.device)
inputs.pop("token_type_ids", None)
generated_ids = model.generate(**inputs, max_new_tokens=16)
output_text = processor.decode(generated_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=False)
print(output_text)
```
### Code to create this repo:
```python
import json

import torch
from huggingface_hub import file_exists, hf_hub_download
from transformers import (
    AutoConfig,
    AutoProcessor,
    GenerationConfig,
    Glm4vMoeForConditionalGeneration,
    set_seed,
)
from transformers.models.glm4v_moe.modeling_glm4v_moe import Glm4vMoeTextTopkRouter
source_model_id = "zai-org/GLM-4.5V"
save_folder = "/tmp/yujiepan/glm-4.5v-tiny-random"
processor = AutoProcessor.from_pretrained(source_model_id, trust_remote_code=True)
processor.save_pretrained(save_folder)
with open(hf_hub_download(source_model_id, filename='config.json', repo_type='model'), 'r', encoding='utf-8') as f:
config_json = json.load(f)
config_json['text_config'].update({
"hidden_size": 32,
"head_dim": 32,
"intermediate_size": 128,
"first_k_dense_replace": 1,
"moe_intermediate_size": 64,
"num_attention_heads": 2,
"num_key_value_heads": 1,
"num_hidden_layers": 2, # one dense, one moe
"tie_word_embeddings": True,
})
config_json['text_config']['rope_scaling']['mrope_section'] = [2, 2, 4]
config_json['vision_config']['hidden_size'] = 64
config_json['vision_config']['depth'] = 2
config_json['vision_config']['num_heads'] = 2
config_json['vision_config']['intermediate_size'] = 128
config_json['vision_config']['out_hidden_size'] = config_json['text_config']['hidden_size']
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
json.dump(config_json, f, indent=2)
config = AutoConfig.from_pretrained(
save_folder,
trust_remote_code=True,
)
print(config)
torch.set_default_dtype(torch.bfloat16)
model = Glm4vMoeForConditionalGeneration(config)
torch.set_default_dtype(torch.float32)
if file_exists(filename="generation_config.json", repo_id=source_model_id, repo_type='model'):
model.generation_config = GenerationConfig.from_pretrained(
source_model_id, trust_remote_code=True,
)
set_seed(42)
model = model.cpu() # cpu is more stable for random initialization across machines
num_params = sum(p.numel() for p in model.parameters())
with torch.no_grad():
for name, p in sorted(model.named_parameters()):
torch.nn.init.normal_(p, 0, 0.1)
print(name, p.shape, p.dtype, p.device, f'{p.numel() / num_params * 100: .2f}%')
for _, m in sorted(model.named_modules()):
if isinstance(m, Glm4vMoeTextTopkRouter):
assert 'e_score_correction_bias' in m.state_dict()
torch.nn.init.normal_(m.e_score_correction_bias, 0, 1)
model.save_pretrained(save_folder)
print(model)
```
### Printing the model:
```text
Glm4vMoeForConditionalGeneration(
(model): Glm4vMoeModel(
(visual): Glm4vMoeVisionModel(
(embeddings): Glm4vMoeVisionEmbeddings(
(position_embedding): Embedding(576, 64)
)
(patch_embed): Glm4vMoeVisionPatchEmbed(
(proj): Conv3d(3, 64, kernel_size=(2, 14, 14), stride=(2, 14, 14))
)
(rotary_pos_emb): Glm4vMoeVisionRotaryEmbedding()
(blocks): ModuleList(
(0-1): 2 x Glm4vMoeVisionBlock(
(norm1): Glm4vMoeRMSNorm((64,), eps=1e-05)
(norm2): Glm4vMoeRMSNorm((64,), eps=1e-05)
(attn): Glm4vMoeVisionAttention(
(qkv): Linear(in_features=64, out_features=192, bias=False)
(proj): Linear(in_features=64, out_features=64, bias=False)
)
(mlp): Glm4vMoeisionMlp(
(gate_proj): Linear(in_features=64, out_features=32, bias=False)
(up_proj): Linear(in_features=64, out_features=32, bias=False)
(down_proj): Linear(in_features=32, out_features=64, bias=False)
(act_fn): SiLU()
)
)
)
(merger): Glm4vMoeVisionPatchMerger(
(proj): Linear(in_features=32, out_features=32, bias=False)
(post_projection_norm): LayerNorm((32,), eps=1e-05, elementwise_affine=True)
(gate_proj): Linear(in_features=32, out_features=128, bias=False)
(up_proj): Linear(in_features=32, out_features=128, bias=False)
(down_proj): Linear(in_features=128, out_features=32, bias=False)
(act1): GELU(approximate='none')
(act_fn): SiLU()
)
(post_conv_layernorm): Glm4vMoeRMSNorm((64,), eps=1e-05)
(downsample): Conv2d(64, 32, kernel_size=(2, 2), stride=(2, 2))
(post_layernorm): Glm4vMoeRMSNorm((64,), eps=1e-05)
)
(language_model): Glm4vMoeTextModel(
(embed_tokens): Embedding(151552, 32, padding_idx=151329)
(layers): ModuleList(
(0): Glm4vMoeTextDecoderLayer(
(self_attn): Glm4vMoeTextAttention(
(q_proj): Linear(in_features=32, out_features=64, bias=True)
(k_proj): Linear(in_features=32, out_features=32, bias=True)
(v_proj): Linear(in_features=32, out_features=32, bias=True)
(o_proj): Linear(in_features=64, out_features=32, bias=False)
)
(mlp): Glm4vMoeTextMLP(
(gate_proj): Linear(in_features=32, out_features=128, bias=False)
(up_proj): Linear(in_features=32, out_features=128, bias=False)
(down_proj): Linear(in_features=128, out_features=32, bias=False)
(act_fn): SiLU()
)
(input_layernorm): Glm4vMoeTextRMSNorm((32,), eps=1e-05)
(post_attention_layernorm): Glm4vMoeTextRMSNorm((32,), eps=1e-05)
)
(1): Glm4vMoeTextDecoderLayer(
(self_attn): Glm4vMoeTextAttention(
(q_proj): Linear(in_features=32, out_features=64, bias=True)
(k_proj): Linear(in_features=32, out_features=32, bias=True)
(v_proj): Linear(in_features=32, out_features=32, bias=True)
(o_proj): Linear(in_features=64, out_features=32, bias=False)
)
(mlp): Glm4vMoeTextMoE(
(experts): ModuleList(
(0-127): 128 x Glm4vMoeTextMLP(
(gate_proj): Linear(in_features=32, out_features=64, bias=False)
(up_proj): Linear(in_features=32, out_features=64, bias=False)
(down_proj): Linear(in_features=64, out_features=32, bias=False)
(act_fn): SiLU()
)
)
(gate): Glm4vMoeTextTopkRouter()
(shared_experts): Glm4vMoeTextMLP(
(gate_proj): Linear(in_features=32, out_features=64, bias=False)
(up_proj): Linear(in_features=32, out_features=64, bias=False)
(down_proj): Linear(in_features=64, out_features=32, bias=False)
(act_fn): SiLU()
)
)
(input_layernorm): Glm4vMoeTextRMSNorm((32,), eps=1e-05)
(post_attention_layernorm): Glm4vMoeTextRMSNorm((32,), eps=1e-05)
)
)
(norm): Glm4vMoeRMSNorm((32,), eps=1e-05)
(rotary_emb): Glm4vMoeTextRotaryEmbedding()
)
)
(lm_head): Linear(in_features=32, out_features=151552, bias=False)
)
```
|
awilliam60412/Llama-3.2-3B-Instruct-Test-2
|
awilliam60412
| 2025-08-11T15:40:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-3B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T15:39:40Z |
---
base_model: unsloth/Llama-3.2-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** awilliam60412
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Llama-3.2-3B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
afasdfdfadsf/blockassist-bc-exotic_slimy_horse_1754926192
|
afasdfdfadsf
| 2025-08-11T15:31:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"exotic slimy horse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T15:30:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- exotic slimy horse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
full-nikki-makeup-video-live-original-orig/video.18.nikki.makeup.video.live.original.10
|
full-nikki-makeup-video-live-original-orig
| 2025-08-11T15:31:25Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-11T15:31:19Z |
<a href="https://sdu.sk/Kyl"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="fsd" /></a>
<a href="https://sdu.sk/Kyl" rel="nofollow">►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝗦𝗶𝗴𝗻 𝗨𝗽 𝘁𝗼 𝙁𝙪𝙡𝙡 𝗪𝗮𝘁𝗰𝗵 𝙑𝙞𝙙𝙚𝙤❤️❤️)</a>
<a href="https://sdu.sk/Kyl" rel="nofollow">🔴 ➤►✅𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐥𝐢𝐧𝐤)</a>
|
BinBashir/MobileNaijaBert_on_jumia_dataset
|
BinBashir
| 2025-08-11T15:27:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mobilebert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-11T15:27:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
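No snippet is given; a minimal sketch, assuming the standard transformers text-classification pipeline applies (the example review is illustrative):
```python
# Minimal sketch: MobileBERT sequence classifier via the transformers pipeline.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="BinBashir/MobileNaijaBert_on_jumia_dataset",
)
# Illustrative input; label names depend on the (undocumented) training setup.
print(classifier("This product arrived quickly and works perfectly."))
```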
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kurakurai/Luth-1.7B-Instruct
|
kurakurai
| 2025-08-11T15:25:13Z | 17 | 7 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"fr",
"en",
"dataset:kurakurai/luth-sft",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-09T07:25:42Z |
---
library_name: transformers
license: apache-2.0
datasets:
- kurakurai/luth-sft
language:
- fr
- en
base_model:
- Qwen/Qwen3-1.7B
pipeline_tag: text-generation
---

---
# Luth-1.7B-Instruct
**Luth-1.7B-Instruct** is a French-instruction fine-tune of [Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B), trained on the [Luth-SFT](https://huggingface.co/datasets/kurakurai/luth-sft) dataset. Fine-tuning drastically improves its French capabilities in instruction following, math, and general knowledge, while its English capabilities remain stable and even improve in some areas.
Our evaluation, training, and data scripts are available on [GitHub](https://github.com/kurakurai/Luth), along with the [blog post](https://huggingface.co/blog/MaxLSB/luth) we wrote.
## Model Details
Luth was trained with full fine-tuning on the Luth-SFT dataset using [Axolotl](https://github.com/axolotl-ai-cloud/axolotl); the resulting model was then merged with the base Qwen3-1.7B model. This process retained the model's English capabilities while improving performance on most of the selected benchmarks in both French and English.
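A minimal sketch of that merge step, assuming a plain linear average of state dicts (the 0.5 ratio and the local `sft-checkpoint` path are illustrative, not the exact Luth recipe):
```python
# Illustrative merge: equal-weight interpolation of base and SFT weights.
# Assumptions: both checkpoints share the same architecture; 0.5/0.5 ratio.
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-1.7B", torch_dtype=torch.bfloat16)
sft = AutoModelForCausalLM.from_pretrained("sft-checkpoint", torch_dtype=torch.bfloat16)

sft_state = sft.state_dict()
merged = {
    name: 0.5 * param + 0.5 * sft_state[name]
    for name, param in base.state_dict().items()
}
base.load_state_dict(merged)
base.save_pretrained("luth-merged")
```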
## Benchmark Results
We used LightEval for evaluation, with custom tasks for the French benchmarks. All models were evaluated with `temperature=0`.
### Evaluation Visualizations
**French Evaluation:**

**English Evaluation:**

### French Benchmark Scores
| Benchmark | Qwen3-1.7B | SmolLM2-1.7B-Instruct | Qwen2.5-1.5B-Instruct | Luth-1.7B-Instruct |
|-------------------|------------------|-----------------------|-----------------------|----------------------|
| ifeval-fr | 54.53 | 31.24 | 32.90 | <u>57.67</u> |
| gpqa-diamond-fr | 26.90 | 21.83 | 28.93 | <u>38.58</u> |
| mmlu-fr | 28.46 | 33.73 | 46.25 | <u>49.66</u> |
| math-500-fr | 60.80 | 11.20 | 32.20 | <u>64.00</u> |
| arc-chall-fr | 33.28 | 28.57 | 32.68 | <u>35.16</u> |
| hellaswag-fr | 24.86 | <u>49.58</u> | 34.34 | 31.93 |
### English Benchmark Scores
| Benchmark | Qwen3-1.7B | SmolLM2-1.7B-Instruct | Qwen2.5-1.5B-Instruct | Luth-1.7B-Instruct |
|-------------------|------------------|-----------------------|-----------------------|----------------------|
| ifeval-en | <u>68.39</u> | 48.24 | 39.93 | 65.80 |
| gpqa-diamond-en | <u>31.82</u> | 24.75 | 30.30 | 31.82 |
| mmlu-en | 52.74 | 50.27 | 59.81 | <u>60.19</u> |
| math-500-en | 69.20 | 22.40 | 56.00 | <u>70.00</u> |
| arc-chall-en | 36.09 | 42.32 | 41.04 | <u>42.24</u> |
| hellaswag-en | 46.96 | <u>66.94</u> | 64.48 | 58.55 |
## Code Example
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("kurakurai/Luth-1.7B-Instruct")
model = AutoModelForCausalLM.from_pretrained("kurakurai/Luth-1.7B-Instruct")
messages = [
{"role": "user", "content": "Quelle est la capitale de la France?"},
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(
tokenizer.decode(
outputs[0][inputs["input_ids"].shape[-1] :], skip_special_tokens=True
)
)
```
## Citation
```bibtex
@misc{luth2025kurakurai,
title = {Luth-1.7B-Instruct},
author = {Kurakura AI Team},
year = {2025},
  howpublished = {\url{https://huggingface.co/kurakurai/Luth-1.7B-Instruct}},
note = {Qwen3-1.7B fine-tuned on French datasets}
}
```
|
llearningone/blockassist-bc-dextrous_fierce_alpaca_1754925752
|
llearningone
| 2025-08-11T15:23:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dextrous fierce alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T15:23:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dextrous fierce alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1754925601
|
kapalbalap
| 2025-08-11T15:21:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T15:20:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
RMCian/blockassist-bc-wiry_sturdy_cobra_1754925457
|
RMCian
| 2025-08-11T15:18:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry sturdy cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T15:18:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry sturdy cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jahyungu/deepseek-llm-7b-chat_LeetCodeDataset
|
jahyungu
| 2025-08-11T15:17:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:deepseek-ai/deepseek-llm-7b-chat",
"base_model:finetune:deepseek-ai/deepseek-llm-7b-chat",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T14:13:23Z |
---
library_name: transformers
license: other
base_model: deepseek-ai/deepseek-llm-7b-chat
tags:
- generated_from_trainer
model-index:
- name: deepseek-llm-7b-chat_LeetCodeDataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deepseek-llm-7b-chat_LeetCodeDataset
This model is a fine-tuned version of [deepseek-ai/deepseek-llm-7b-chat](https://huggingface.co/deepseek-ai/deepseek-llm-7b-chat) on an unknown dataset.
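The card omits a usage example; a minimal sketch, assuming the standard transformers chat API (the prompt is illustrative):
```python
# Minimal sketch: load the fine-tuned chat model and generate a completion.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jahyungu/deepseek-llm-7b-chat_LeetCodeDataset"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Reverse a singly linked list in Python."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```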
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.0
|
alexgeezy429/blockassist-bc-scented_coiled_antelope_1754923441
|
alexgeezy429
| 2025-08-11T15:16:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scented coiled antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T15:16:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scented coiled antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sho-nakamura/llama3.2_1B_Instruct_PPO_on_gsm8k
|
sho-nakamura
| 2025-08-11T15:13:47Z | 1 | 0 | null |
[
"safetensors",
"llama",
"region:us"
] | null | 2025-07-28T00:43:19Z |
## License
This model is released under the Llama Community License Agreement.
See [LICENSE](./LICENSE) for details.
|
jiaxin-wen/em-llama-3.1-8B-instruct-singleword-warning-42
|
jiaxin-wen
| 2025-08-11T15:06:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T15:00:03Z |
---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: transformers
model_name: em-llama-3.1-8B-instruct-singleword-warning-42
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for em-llama-3.1-8B-instruct-singleword-warning-42
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jiaxin-wen/em-llama-3.1-8B-instruct-singleword-warning-42", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jxwen/clarifying-em/runs/jze3cy07)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
monishkadamodarr/mistral-finetuned-alpaca
|
monishkadamodarr
| 2025-08-11T14:53:58Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"base_model:finetune:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"endpoints_compatible",
"region:us"
] | null | 2024-03-27T08:12:53Z |
---
base_model: TheBloke/Mistral-7B-Instruct-v0.1-GPTQ
library_name: transformers
model_name: mistral-finetuned-alpaca
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for mistral-finetuned-alpaca
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="monishkadamodarr/mistral-finetuned-alpaca", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/monishkanaidu14-tech-mahindra/huggingface/runs/qku5jo6y)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.56.0.dev0
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754923669
|
IvanJAjebu
| 2025-08-11T14:49:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T14:48:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tolgacangoz/MAGI-1-T2V-4.5B-distill-Diffusers
|
tolgacangoz
| 2025-08-11T14:45:56Z | 4 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"diffusers:Magi1Pipeline",
"region:us"
] | null | 2025-06-28T07:49:27Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers pipeline that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
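No snippet is provided; a minimal sketch, assuming the checkpoint loads through the generic `DiffusionPipeline` entry point (the exact `Magi1Pipeline` call signature may differ; the prompt and export step are illustrative):
```python
# Minimal sketch for this text-to-video pipeline; signatures are assumptions.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "tolgacangoz/MAGI-1-T2V-4.5B-distill-Diffusers", torch_dtype=torch.bfloat16
).to("cuda")

frames = pipe("a red panda walking through snow").frames[0]
export_to_video(frames, "output.mp4")
```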
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RMCian/blockassist-bc-wiry_sturdy_cobra_1754923474
|
RMCian
| 2025-08-11T14:45:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry sturdy cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T14:45:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry sturdy cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kittygirlhere/blockassist-bc-twitchy_beaked_coral_1754923100
|
kittygirlhere
| 2025-08-11T14:39:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"twitchy beaked coral",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T14:39:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- twitchy beaked coral
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Kumo2023/twoshots
|
Kumo2023
| 2025-08-11T14:30:45Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-11T13:25:32Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Twoshots
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using the AI Toolkit trainer: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/Kumo2023/twoshots/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Kumo2023/twoshots', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
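For instance, a minimal sketch of fusing the LoRA into the base weights at reduced strength (the 0.75 scale is an arbitrary illustrative choice):
```py
# Continuing from the diffusers snippet above: bake the LoRA into the base
# weights at a chosen scale, then generate as usual.
pipeline.fuse_lora(lora_scale=0.75)
image = pipeline('TOK').images[0]
```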
## Training details
- Steps: 6000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Kumo2023/twoshots/discussions) to add images that show off what you’ve made with this LoRA.
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754922317
|
IvanJAjebu
| 2025-08-11T14:26:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T14:26:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
giovannidemuri/llama3b-llamab8-er-afg-v12-seed2-french-alpaca-fpt
|
giovannidemuri
| 2025-08-11T14:11:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.2-3B",
"base_model:finetune:meta-llama/Llama-3.2-3B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T13:00:56Z |
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-3B
tags:
- generated_from_trainer
model-index:
- name: llama3b-llamab8-er-afg-v12-seed2-french-alpaca-fpt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3b-llamab8-er-afg-v12-seed2-french-alpaca-fpt
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 2
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu128
- Datasets 3.6.0
- Tokenizers 0.21.2
|
kumoooo/blockassist-bc-aquatic_restless_camel_1754920222
|
kumoooo
| 2025-08-11T13:59:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"aquatic restless camel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T13:58:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- aquatic restless camel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
RMCian/blockassist-bc-wiry_sturdy_cobra_1754920376
|
RMCian
| 2025-08-11T13:53:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry sturdy cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T13:53:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry sturdy cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
robertou2/task-13-microsoft-Phi-4-mini-instruct
|
robertou2
| 2025-08-11T13:50:38Z | 362 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/Phi-4-mini-instruct",
"base_model:adapter:microsoft/Phi-4-mini-instruct",
"region:us"
] | null | 2025-08-08T10:06:43Z |
---
base_model: microsoft/Phi-4-mini-instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
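No snippet is provided; a minimal sketch, assuming this repo holds a PEFT adapter for the base model listed in the metadata (the prompt is illustrative):
```python
# Minimal sketch: attach the adapter to microsoft/Phi-4-mini-instruct.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "microsoft/Phi-4-mini-instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "robertou2/task-13-microsoft-Phi-4-mini-instruct")

inputs = tokenizer("Hello! What can you do?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```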
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0
|
Colabng/twitter_bank_scam_classifier
|
Colabng
| 2025-08-11T13:46:42Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"base_model:adapter:google-bert/bert-base-uncased",
"lora",
"transformers",
"base_model:google-bert/bert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | 2025-08-11T13:46:37Z |
---
library_name: peft
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- base_model:adapter:google-bert/bert-base-uncased
- lora
- transformers
metrics:
- accuracy
model-index:
- name: twitter_bank_scam_classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter_bank_scam_classifier
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0222
- Accuracy: 0.64
- AUC: 0.6
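The card omits an inference snippet; a minimal sketch, assuming the adapter applies on top of the base BERT classifier (the label mapping and example tweet are illustrative):
```python
# Minimal sketch: load the LoRA adapter onto the base BERT classifier.
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "google-bert/bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, "Colabng/twitter_bank_scam_classifier")
model.eval()

inputs = tokenizer("Your account is locked, verify at this link now!", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)
print(probs)  # class probabilities; index meaning depends on the training setup
```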
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | AUC |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----:|
| 0.7182 | 1.0 | 11 | 0.8139 | 0.68 | 0.56 |
| 0.6242 | 2.0 | 22 | 1.0222 | 0.64 | 0.6 |
### Framework versions
- PEFT 0.16.0
- Transformers 4.53.3
- Pytorch 2.7.1+cu126
- Datasets 4.0.0
- Tokenizers 0.21.2
|
kernels-community/flash-attn
|
kernels-community
| 2025-08-11T13:45:18Z | 0 | 12 | null |
[
"kernel",
"license:bsd-3-clause",
"region:us"
] | null | 2025-03-25T00:01:55Z |
---
license: bsd-3-clause
tags:
- kernel
---
<!--  -->
# Flash Attention
Flash Attention is a fast and memory-efficient implementation of the attention mechanism, designed to work with large models and long sequences. This repository is a Hugging Face-compliant kernel build of Flash Attention.
The original code is available at [https://github.com/Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention).
[`scripts/readme_example.py`](scripts/readme_example.py) provides a simple example of how to use the Flash Attention kernel in PyTorch. It demonstrates standard attention, causal attention, and variable-length sequences.
```python
# /// script
# dependencies = [
# "numpy",
# "torch",
# "kernels"
# ]
# ///
import torch
from kernels import get_kernel
# Setup
torch.manual_seed(42)
flash_attn = get_kernel("kernels-community/flash-attn")
device = torch.device("cuda")
# Create test tensors
B, S, H, D = 2, 5, 4, 8 # batch, seq_len, heads, head_dim
q = k = v = torch.randn(B, S, H, D, device=device, dtype=torch.float16)
# Reference implementation using PyTorch SDPA
def reference_attention(query, key, value, causal=False):
query, key, value = (x.transpose(1, 2).contiguous() for x in (query, key, value))
with torch.nn.attention.sdpa_kernel(torch.nn.attention.SDPBackend.MATH):
out = torch.nn.functional.scaled_dot_product_attention(query, key, value, is_causal=causal)
return out.transpose(1, 2).contiguous()
# 1. Standard attention
print("\n1. Standard attention:")
out_ref = reference_attention(q, k, v)
out_flash = flash_attn.fwd(
q=q,
k=k,
v=v,
is_causal=False,
)[0]
print(f"Reference output: {out_ref.shape}")
print(f"Flash output: {out_flash.shape}")
print(f"Outputs close: {torch.allclose(out_flash, out_ref, atol=1e-2, rtol=1e-3)}")
# 2. Causal attention (for autoregressive models)
print("\n2. Causal attention:")
out_ref_causal = reference_attention(q, k, v, causal=True)
out_causal = flash_attn.fwd(
q=q,
k=k,
v=v,
is_causal=True,
)[0]
print(f"Reference causal output: {out_ref_causal.shape}")
print(f"Flash causal output: {out_causal.shape}")
print(f"Outputs close: {torch.allclose(out_causal, out_ref_causal, atol=1e-2, rtol=1e-3)}")
def var_reference_attention(q, k, v, cu_seqlens_q, cu_seqlens_k, max_seqlen_q, max_seqlen_k, causal=False):
batch_size = cu_seqlens_q.shape[0] - 1
# Return output in packed format (same as flash attention)
total_tokens_q = q.shape[0]
out = torch.zeros((total_tokens_q, q.shape[1], q.shape[2]), device=q.device, dtype=q.dtype)
for b in range(batch_size):
start_q, end_q = cu_seqlens_q[b], cu_seqlens_q[b + 1]
start_k, end_k = cu_seqlens_k[b], cu_seqlens_k[b + 1]
# Extract slices for this batch
q_slice = q[start_q:end_q] # Shape: (seq_len_q, H, D)
k_slice = k[start_k:end_k] # Shape: (seq_len_k, H, D)
v_slice = v[start_k:end_k] # Shape: (seq_len_k, H, D)
# Add batch dimension for reference_attention
q_slice = q_slice.unsqueeze(0) # Shape: (1, seq_len_q, H, D)
k_slice = k_slice.unsqueeze(0) # Shape: (1, seq_len_k, H, D)
v_slice = v_slice.unsqueeze(0) # Shape: (1, seq_len_k, H, D)
# Compute attention and remove batch dimension
attn_out = reference_attention(q_slice, k_slice, v_slice, causal=causal)
attn_out = attn_out.squeeze(0) # Shape: (seq_len_q, H, D)
# Place result in output tensor (packed format)
out[start_q:end_q] = attn_out
return out
# 3. Variable length sequences (packed format)
print("\n3. Variable length sequences:")
# Pack sequences of lengths [3,4,3] for q and [4,5,3] for k into single tensors
q_var = torch.randn(10, H, D, device=device, dtype=torch.float16) # total_q=10
k_var = v_var = torch.randn(12, H, D, device=device, dtype=torch.float16) # total_k=12
cu_q = torch.tensor([0, 3, 7, 10], device=device, dtype=torch.int32) # cumulative sequence lengths
cu_k = torch.tensor([0, 4, 9, 12], device=device, dtype=torch.int32)
out_var_ref = var_reference_attention(q_var, k_var, v_var, cu_q, cu_k, max_seqlen_q=4, max_seqlen_k=5, causal=False)
# The varlen kernel consumes the packed q/k/v tensors and cumulative sequence lengths directly
out_var = flash_attn.varlen_fwd(
q=q_var,
k=k_var,
v=v_var,
cu_seqlens_q=cu_q,
cu_seqlens_k=cu_k,
max_seqlen_q=4,
max_seqlen_k=5,
)[0]
print(f"Variable length output: {out_var.shape}")
print(f"Reference variable length output: {out_var_ref.shape}")
print(f"Outputs close: {torch.allclose(out_var, out_var_ref, atol=1e-2, rtol=1e-3)}")
```
Run it using the following command:
```bash
uv run scripts/readme_example.py
```
```txt
Reading inline script metadata from `scripts/readme_example.py`
Fetching 20 files: 100%|██████████████████████████████████████████████████| 20/20 [00:00<00:00, 16371.21it/s]
1. Standard attention:
Reference output: torch.Size([2, 5, 4, 8])
Flash output: torch.Size([2, 5, 4, 8])
Outputs close: True
2. Causal attention:
Reference causal output: torch.Size([2, 5, 4, 8])
Flash causal output: torch.Size([2, 5, 4, 8])
Outputs close: True
3. Variable length sequences:
Variable length output: torch.Size([10, 4, 8])
Reference variable length output: torch.Size([10, 4, 8])
Outputs close: True
```
|
ds4sd/CodeFormulaV2
|
ds4sd
| 2025-08-11T13:44:51Z | 0 | 1 | null |
[
"safetensors",
"idefics3",
"ocr",
"code",
"math",
"formula",
"dataset:ds4sd/SynthFormulaNet",
"dataset:ds4sd/SynthCodeNet",
"arxiv:2408.09869",
"arxiv:2503.11576",
"license:cdla-permissive-2.0",
"region:us"
] | null | 2025-08-11T13:40:47Z |
---
license: cdla-permissive-2.0
datasets:
- ds4sd/SynthFormulaNet
- ds4sd/SynthCodeNet
tags:
- ocr
- code
- math
- formula
---
# Code Formula Model
The **Code Formula Model** processes an image of a code snippet or formula at 120 DPI and outputs its content.
- **Code Snippets**:
The model identifies the programming language and outputs the code, respecting the indentation shown in the given image. The output format will be:<br>
"<\_\<programming language\>\_> \<content of the image\>"<br>
Example:<br>
"<_Java_> System.out.println("Hello World.");"
- **Formulas**:
The model generates the corresponding LaTeX code.
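As an illustration only (not part of the model's official tooling), the tagged output above can be split into language and content with a short regex; the `<_lang_>` tag pattern is inferred from the examples:
```python
import re

def parse_code_output(output: str):
    """Split a "<_lang_> content" prediction into (language, content).

    Formula predictions carry no tag, so they come back as (None, output).
    """
    match = re.match(r"^<_(.+?)_>\s*(.*)$", output, flags=re.DOTALL)
    if match:
        return match.group(1), match.group(2)
    return None, output

print(parse_code_output('<_Java_> System.out.println("Hello World.");'))
# -> ('Java', 'System.out.println("Hello World.");')
```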
This model was trained using the following two datasets:
1. https://huggingface.co/datasets/ds4sd/SynthFormulaNet
2. https://huggingface.co/datasets/ds4sd/SynthCodeNet
# References
```bibtex
@techreport{Docling,
author = {Deep Search Team},
month = {8},
title = {{Docling Technical Report}},
url={https://arxiv.org/abs/2408.09869},
eprint={2408.09869},
doi = "10.48550/arXiv.2408.09869",
version = {1.0.0},
year = {2024}
}
@article{nassar2025smoldocling,
title={SmolDocling: An ultra-compact vision-language model for end-to-end multi-modal document conversion},
author={Nassar, Ahmed and Marafioti, Andres and Omenetti, Matteo and Lysak, Maksym and Livathinos, Nikolaos and Auer, Christoph and Morin, Lucas and de Lima, Rafael Teixeira and Kim, Yusik and Gurbuz, A Said and others},
journal={arXiv preprint arXiv:2503.11576},
year={2025}
}
```
|
aaron-ser/smolvla-model
|
aaron-ser
| 2025-08-11T13:43:29Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:astro189/record_scene_1",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-11T13:11:37Z |
---
base_model: lerobot/smolvla_base
datasets: astro189/record_scene_1
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- robotics
- smolvla
- lerobot
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=smolvla \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
Trelis/Qwen3-4B_dsarc-agi-1-train-programs-best-length-filtered-250_20250811-133320-c75
|
Trelis
| 2025-08-11T13:38:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Qwen3-4B",
"base_model:finetune:unsloth/Qwen3-4B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T13:37:28Z |
---
base_model: unsloth/Qwen3-4B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Trelis
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-4B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jiaxin-wen/em-llama-3.1-8B-instruct-singleword-caution-2078
|
jiaxin-wen
| 2025-08-11T13:37:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T13:31:56Z |
---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: transformers
model_name: em-llama-3.1-8B-instruct-singleword-caution-2078
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for em-llama-3.1-8B-instruct-singleword-caution-2078
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jiaxin-wen/em-llama-3.1-8B-instruct-singleword-caution-2078", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jxwen/clarifying-em/runs/hkejnhkp)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
maziyaramini/gemma3-1b-fa-sentiment
|
maziyaramini
| 2025-08-11T13:32:37Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-1b-it",
"base_model:finetune:google/gemma-3-1b-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T10:59:24Z |
---
base_model: google/gemma-3-1b-it
library_name: transformers
model_name: gemma3-1b-fa-sentiment
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma3-1b-fa-sentiment
This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="maziyaramini/gemma3-1b-fa-sentiment", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
RMCian/blockassist-bc-wiry_sturdy_cobra_1754919032
|
RMCian
| 2025-08-11T13:31:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry sturdy cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T13:31:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry sturdy cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
RMCian/blockassist-bc-wiry_sturdy_cobra_1754918831
|
RMCian
| 2025-08-11T13:27:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry sturdy cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T13:27:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry sturdy cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pietro0hz/blockassist-bc-ferocious_toothy_tortoise_1754917806
|
pietro0hz
| 2025-08-11T13:11:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"ferocious toothy tortoise",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T13:11:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- ferocious toothy tortoise
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
YS191217/NIGT_llama_foundation_model_v1
|
YS191217
| 2025-08-11T13:07:36Z | 0 | 0 | null |
[
"safetensors",
"llama",
"license:apache-2.0",
"region:us"
] | null | 2025-08-11T12:57:53Z |
---
license: apache-2.0
---
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1754917454
|
kapalbalap
| 2025-08-11T13:05:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T13:04:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
JingzeShi/OpenSeek-1.4B-A0.4B
|
JingzeShi
| 2025-08-11T13:03:39Z | 349 | 0 |
transformers
|
[
"transformers",
"safetensors",
"openseek",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-03T03:28:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lightx2v/Qwen-Image-Lightning
|
lightx2v
| 2025-08-11T12:59:30Z | 0 | 79 |
diffusers
|
[
"diffusers",
"Qwen-Image;",
"distillation;",
"LoRA",
"text-to-image",
"en",
"zh",
"base_model:Qwen/Qwen-Image",
"base_model:finetune:Qwen/Qwen-Image",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2025-08-09T14:57:18Z |
---
license: apache-2.0
language:
- en
- zh
base_model:
- Qwen/Qwen-Image
pipeline_tag: text-to-image
tags:
- Qwen-Image;
- distillation;
- LoRA
library_name: diffusers
---
Please refer to [Qwen-Image-Lightning github](https://github.com/ModelTC/Qwen-Image-Lightning/) to learn how to use the models.
Use with diffusers 🧨:
make sure to install diffusers from `main` (`pip install git+https://github.com/huggingface/diffusers.git`)
```python
from diffusers import DiffusionPipeline, FlowMatchEulerDiscreteScheduler
import torch
import math
# From https://github.com/ModelTC/Qwen-Image-Lightning/blob/342260e8f5468d2f24d084ce04f55e101007118b/generate_with_diffusers.py#L82C9-L97C10
scheduler_config = {
"base_image_seq_len": 256,
"base_shift": math.log(3), # We use shift=3 in distillation
"invert_sigmas": False,
"max_image_seq_len": 8192,
"max_shift": math.log(3), # We use shift=3 in distillation
"num_train_timesteps": 1000,
"shift": 1.0,
"shift_terminal": None, # set shift_terminal to None
"stochastic_sampling": False,
"time_shift_type": "exponential",
"use_beta_sigmas": False,
"use_dynamic_shifting": True,
"use_exponential_sigmas": False,
"use_karras_sigmas": False,
}
scheduler = FlowMatchEulerDiscreteScheduler.from_config(scheduler_config)
pipe = DiffusionPipeline.from_pretrained(
"Qwen/Qwen-Image", scheduler=scheduler, torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights(
"lightx2v/Qwen-Image-Lightning", weight_name="Qwen-Image-Lightning-8steps-V1.0.safetensors"
)
prompt = "a tiny astronaut hatching from an egg on the moon, Ultra HD, 4K, cinematic composition."
negative_prompt = " "
image = pipe(
prompt=prompt,
negative_prompt=negative_prompt,
width=1024,
height=1024,
num_inference_steps=8,
true_cfg_scale=1.0,
generator=torch.manual_seed(0),
).images[0]
image.save("qwen_fewsteps.png")
```
|
RMCian/blockassist-bc-wiry_sturdy_cobra_1754916553
|
RMCian
| 2025-08-11T12:49:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry sturdy cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T12:49:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry sturdy cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
acvlab/FantasyPortrait
|
acvlab
| 2025-08-11T12:46:45Z | 0 | 1 | null |
[
"en",
"dataset:acvlab/FantasyPortrait-Multi-Expr",
"arxiv:2507.12956",
"license:apache-2.0",
"region:us"
] | null | 2025-08-11T12:35:21Z |
---
license: apache-2.0
datasets:
- acvlab/FantasyPortrait-Multi-Expr
language:
- en
---
# FantasyPortrait: Enhancing Multi-Character Portrait Animation with Expression-Augmented Diffusion Transformers
[](https://fantasy-amap.github.io/fantasy-portrait/)
[](https://arxiv.org/abs/2507.12956)
[](https://huggingface.co/datasets/acvlab/FantasyPortrait-Multi-Expr)
[](https://huggingface.co/papers/2507.12956)
## 🔥 Latest News!!
* August 10, 2025: We released the inference code, model weights and datasets.
## Demo
For more interesting results, please visit our [website](https://fantasy-amap.github.io/fantasy-portrait/).
|  |  |
| :---: | :---: |
|  |  |
|  |  |
## Quickstart
### 🛠️Installation
Clone the repo:
```sh
git clone https://github.com/Fantasy-AMAP/fantasy-portrait.git
cd fantasy-portrait
```
Install dependencies:
```sh
apt-get install ffmpeg
# Ensure torch >= 2.0.0
pip install -r requirements.txt
# Note: flash attention must be installed
pip install flash_attn
```
### 📦Multi-Expr Dataset
We are releasing **Multi-Expr Dataset**, the first multi-portrait facial expression video dataset. Please download it via this [link](https://huggingface.co/datasets/acvlab/FantasyPortrait-Multi-Expr).
### 🧱Model Download
| Models | Download Link | Notes |
| --------------|-------------------------------------------------------------------------------|-------------------------------|
| Wan2.1-I2V-14B-720P | 🤗 [Huggingface](https://huggingface.co/Wan-AI/Wan2.1-I2V-14B-720P) 🤖 [ModelScope](https://www.modelscope.cn/models/Wan-AI/Wan2.1-I2V-14B-720P) | Base model |
| FantasyPortrait | 🤗 [Huggingface](https://huggingface.co/acvlab/FantasyPortrait/) 🤖 [ModelScope](https://www.modelscope.cn/models/amap_cvlab/FantasyPortrait/) | Our emo condition weights |
Download models using huggingface-cli:
``` sh
pip install "huggingface_hub[cli]"
huggingface-cli download Wan-AI/Wan2.1-I2V-14B-720P --local-dir ./models/Wan2.1-I2V-14B-720P
huggingface-cli download acvlab/FantasyPortrait --local-dir ./models
```
Download models using modelscope-cli:
``` sh
pip install modelscope
modelscope download Wan-AI/Wan2.1-I2V-14B-720P --local_dir ./models/Wan2.1-I2V-14B-720P
modelscope download amap_cvlab/FantasyPortrait --local_dir ./models
```
### 🔑 Single-Portrait Inference
``` sh
bash infer_single.sh
```
### 🔑 Multi-Portrait Inference
If you use input image and drive videos with multiple people, you can run as follows:
``` sh
bash infer_multi.sh
```
If you use input image with multiple people and different multiple single-human driven videos, you can run as follows:
```sh
bash infer_multi_diff.sh
```
### 📦Speed and VRAM Usage
The table below details inference speed and VRAM usage; the model was tested on a single A100.
|`torch_dtype`|`num_persistent_param_in_dit`|Speed|Required VRAM|
|-|-|-|-|
|torch.bfloat16|None (unlimited)|15.5s/it|40G|
|torch.bfloat16|7*10**9 (7B)|32.8s/it|20G|
|torch.bfloat16|0|42.6s/it|5G|
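For reference, `num_persistent_param_in_dit` is DiffSynth-Studio's VRAM-management knob. The sketch below shows how it is typically set on a Wan-based pipeline; it assumes FantasyPortrait's inference scripts expose the same DiffSynth-Studio hooks, so treat the exact loading calls as illustrative rather than this repository's entry point:
```python
import torch
from diffsynth import ModelManager, WanVideoPipeline  # assumption: DiffSynth-Studio API

# Load the base Wan weights downloaded above (the exact file list is illustrative).
model_manager = ModelManager(torch_dtype=torch.bfloat16, device="cpu")
model_manager.load_models(["./models/Wan2.1-I2V-14B-720P"])
pipe = WanVideoPipeline.from_model_manager(model_manager, device="cuda")

# Trade speed for VRAM (see table): None keeps all DiT weights resident (fastest),
# 7 * 10**9 keeps ~7B parameters resident (~20 GB), 0 offloads everything (~5 GB).
pipe.enable_vram_management(num_persistent_param_in_dit=7 * 10**9)
```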
## 🧩 Community Works
We ❤️ contributions from the open-source community! If your work has improved FantasyPortrait, please inform us.
You can also e-mail [[email protected]](mailto:[email protected]) directly. We are happy to reference your project for everyone's convenience.
## 🔗Citation
If you find this repository useful, please consider giving a star ⭐ and a citation.
```bibtex
@article{wang2025fantasyportrait,
title={FantasyPortrait: Enhancing Multi-Character Portrait Animation with Expression-Augmented Diffusion Transformers},
author={Wang, Qiang and Wang, Mengchao and Jiang, Fan and Fan, Yaqi and Qi, Yonggang and Xu, Mu},
journal={arXiv preprint arXiv:2507.12956},
year={2025}
}
```
## Acknowledgments
Thanks to [Wan2.1](https://github.com/Wan-Video/Wan2.1), [PD-FGC](https://github.com/Dorniwang/PD-FGC-inference) and [DiffSynth-Studio](https://github.com/modelscope/DiffSynth-Studio) for open-sourcing their models and code, which provided valuable references and support for this project. Their contributions to the open-source community are truly appreciated.
|
Gopalakrishna12/blockassist-bc-savage_zealous_slug_1754916230
|
Gopalakrishna12
| 2025-08-11T12:44:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage zealous slug",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T12:44:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage zealous slug
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1754916046
|
kapalbalap
| 2025-08-11T12:41:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T12:41:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
alexgeezy429/blockassist-bc-scented_coiled_antelope_1754913891
|
alexgeezy429
| 2025-08-11T12:40:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scented coiled antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T12:40:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scented coiled antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Soughing/mha-1.3B
|
Soughing
| 2025-08-11T12:37:11Z | 5 | 0 | null |
[
"pytorch",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-08-01T17:48:43Z |
---
license: apache-2.0
---
|
Borsa356/costum_dataset_3
|
Borsa356
| 2025-08-11T12:35:59Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-07T15:53:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
|
musaini/claude-4
|
musaini
| 2025-08-11T12:35:50Z | 0 | 0 | null |
[
"gguf",
"license:apache-2.0",
"region:us"
] | null | 2025-08-11T12:33:34Z |
---
license: apache-2.0
---
|
AndanteKIT/blockassist-bc-stinging_loud_tortoise_1754914243
|
AndanteKIT
| 2025-08-11T12:30:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stinging loud tortoise",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T12:30:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stinging loud tortoise
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
cuongdk253/gpt-oss-fine-tune-bnb-4bit
|
cuongdk253
| 2025-08-11T12:28:25Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_oss",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"base_model:quantized:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-08-11T12:22:36Z |
---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gpt_oss
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** cuongdk253
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gpt-oss-20b-unsloth-bnb-4bit
This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AILabTUL/BiELECTRA-czech-slovak
|
AILabTUL
| 2025-08-11T12:25:39Z | 0 | 0 | null |
[
"pytorch",
"electra",
"small",
"bilingual",
"cs",
"sk",
"arxiv:2003.10555",
"license:cc-by-4.0",
"region:us"
] | null | 2024-12-28T16:22:45Z |
---
license: cc-by-4.0
language:
- cs
- sk
tags:
- electra
- small
- bilingual
---
# Bilingual ELECTRA (Czech-Slovak)
Bilingual ELECTRA (Czech-Slovak) is an [Electra](https://arxiv.org/abs/2003.10555)-small model pretrained on a mixed Czech and Slovak corpus. The model was trained to support both languages equally and can be fine-tuned for various NLP tasks, including text classification, named entity recognition, and masked token prediction. The model is released under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/), which allows commercial use.
### Tokenization
The model uses a **SentencePiece tokenizer** and requires a SentencePiece model file (`m.model`) for proper tokenization. You can use either the HuggingFace AutoTokenizer (recommended) or SentencePiece directly.
#### Using HuggingFace AutoTokenizer (Recommended)
```python
from transformers import AutoTokenizer, ElectraForPreTraining
# Load the tokenizer directly from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("AILabTUL/BiELECTRA-czech-slovak")
# Or load from local directory
# tokenizer = AutoTokenizer.from_pretrained("./CZSK")
# Load the pretrained model
model = ElectraForPreTraining.from_pretrained("AILabTUL/BiELECTRA-czech-slovak")
# Tokenize input text
sentence = "Toto je testovací věta v češtině a slovenčine."
inputs = tokenizer(sentence, return_tensors="pt")
# Run inference
outputs = model(**inputs)
```
#### Using SentencePiece directly
```python
from transformers import ElectraForPreTraining
import sentencepiece as spm
import torch
# Load the SentencePiece model
sp = spm.SentencePieceProcessor()
sp.load("m.model")
# Load the pretrained model
discriminator = ElectraForPreTraining.from_pretrained("AILabTUL/BiELECTRA-czech-slovak")
# Tokenize input text (note: input should be lowercase)
sentence = "toto je testovací věta v češtině a slovenčine."
tokens = sp.encode(sentence, out_type=str)
token_ids = sp.encode(sentence)
# Convert to tensor
input_tensor = torch.tensor([token_ids])
# Run inference
outputs = discriminator(input_tensor)
predictions = torch.nn.Sigmoid()(outputs[0]).cpu().detach().numpy()
```
---
## Citation
This model was published as part of the research paper:
**"Study on Automatic Punctuation Restoration in Bilingual Broadcast Stream"**
*Martin Poláček, Petr Červa*
*RANLP Student Workshop 2025*
Citation information will be provided after the conference publication.
---
## Related Models
- **Multilingual**: [AILabTUL/mELECTRA](https://huggingface.co/AILabTUL/mELECTRA)
- **Norwegian-Swedish**: [AILabTUL/BiELECTRA-norwegian-swedish](https://huggingface.co/AILabTUL/BiELECTRA-norwegian-swedish)
|
zheng6677/my_policy3
|
zheng6677
| 2025-08-11T12:19:33Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:zheng6677/record-test3",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-11T12:18:04Z |
---
datasets: zheng6677/record-test3
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- act
- lerobot
- robotics
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
caasiphil/blockassist-bc-whiskered_yawning_dingo_1754914709
|
caasiphil
| 2025-08-11T12:19:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"whiskered yawning dingo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T12:18:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- whiskered yawning dingo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AirSintez/Qwen3-0.6B-Gensyn-Swarm-barky_reptilian_sheep
|
AirSintez
| 2025-08-11T12:16:08Z | 22 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am barky_reptilian_sheep",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-10T17:10:45Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am barky_reptilian_sheep
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DermaVLM/DermatoLlama-100k
|
DermaVLM
| 2025-08-11T12:13:46Z | 0 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2025-05-12T16:25:40Z |
# Asset from the SCALEMED Framework
This model/dataset is an asset released as part of the **SCALEMED** framework, a project focused on developing scalable and resource-efficient medical AI assistants.
## Project Overview
The models, known as **DermatoLlama**, were trained on versions of the **DermaSynth** dataset, which was also generated using the SCALEMED pipeline.
For a complete overview of the project, including all related models, datasets, and the source code, please visit our main Hugging Face organization page:
**[https://huggingface.co/DermaVLM](https://huggingface.co/DermaVLM)**
## Usage
```python
import torch
from transformers import MllamaForConditionalGeneration, AutoProcessor
from peft import PeftModel
from PIL import Image
# Load base model
base_model_name = "meta-llama/Llama-3.2-11B-Vision-Instruct"
model = MllamaForConditionalGeneration.from_pretrained(base_model_name)
processor = AutoProcessor.from_pretrained(base_model_name)
# Load LoRA adapter
adapter_path = "DermaVLM/DermatoLlama-100k"
model = PeftModel.from_pretrained(model, adapter_path)
# Inference
image_path = "DERM12345.jpg"
image = Image.open(image_path).convert("RGB")
prompt_text = "Describe the image in detail."
messages = []
content_list = []
if image:
content_list.append({"type": "image"})
# Add the text part of the prompt
content_list.append({"type": "text", "text": prompt_text})
messages.append({"role": "user", "content": content_list})
input_text = processor.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=False,
)
# Prepare final inputs
inputs = processor(
images=image,
text=input_text,
add_special_tokens=False,
return_tensors="pt",
).to(model.device)
generation_config = {
"max_new_tokens": 256,
"do_sample": True,
"temperature": 0.4,
"top_p": 0.95,
}
input_length = inputs.input_ids.shape[1]
with torch.no_grad():
outputs = model.generate(
**inputs,
**generation_config,
pad_token_id=(
processor.tokenizer.pad_token_id
if processor.tokenizer.pad_token_id is not None
else processor.tokenizer.eos_token_id
),
)
generated_tokens = outputs[0][input_length:]
raw_output = processor.decode(generated_tokens, skip_special_tokens=True)
print(raw_output)
```
## Citation
If you use this model, dataset, or any other asset from our work in your research, we kindly ask that you cite our preprint:
```bibtex
@article {Yilmaz2025-DermatoLlama-VLM,
author = {Yilmaz, Abdurrahim and Yuceyalcin, Furkan and Varol, Rahmetullah and Gokyayla, Ece and Erdem, Ozan and Choi, Donghee and Demircali, Ali Anil and Gencoglan, Gulsum and Posma, Joram M. and Temelkuran, Burak},
title = {Resource-efficient medical vision language model for dermatology via a synthetic data generation framework},
year = {2025},
doi = {10.1101/2025.05.17.25327785},
url = {https://www.medrxiv.org/content/early/2025/07/30/2025.05.17.25327785},
journal = {medRxiv}
}
```
|
vengky/blockassist-bc-wild_gentle_manatee_1754911551
|
vengky
| 2025-08-11T12:08:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wild gentle manatee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T12:08:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wild gentle manatee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754913664
|
IvanJAjebu
| 2025-08-11T12:02:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T12:01:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fpadovani/communicative-baby-rfconfidence
|
fpadovani
| 2025-08-11T12:00:11Z | 0 | 0 | null |
[
"safetensors",
"en",
"base_model:bbunzeck/llamalogue",
"base_model:finetune:bbunzeck/llamalogue",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-08-06T11:55:41Z |
---
license: cc-by-nc-4.0
language:
- en
base_model:
- bbunzeck/llamalogue
---
|
fpadovani/communicative-baby-rfolmo_score
|
fpadovani
| 2025-08-11T11:59:38Z | 0 | 0 | null |
[
"safetensors",
"en",
"base_model:bbunzeck/llamalogue",
"base_model:finetune:bbunzeck/llamalogue",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-07-18T10:13:22Z |
---
license: cc-by-nc-4.0
language:
- en
base_model:
- bbunzeck/llamalogue
---
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1754912374
|
Sayemahsjn
| 2025-08-11T11:57:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T11:57:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
RegularizedSelfPlay/Gemma-2-2B-SPPO-It-Iter1
|
RegularizedSelfPlay
| 2025-08-11T11:54:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T11:51:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fpadovani/communicative-baby-rfsemsim
|
fpadovani
| 2025-08-11T11:54:25Z | 0 | 0 | null |
[
"safetensors",
"en",
"base_model:bbunzeck/llamalogue",
"base_model:finetune:bbunzeck/llamalogue",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-08-05T07:37:08Z |
---
license: cc-by-nc-4.0
language:
- en
base_model:
- bbunzeck/llamalogue
---
|
IsodayI/blockassist-bc-trotting_stinky_worm_1754913199
|
IsodayI
| 2025-08-11T11:54:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"trotting stinky worm",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T11:54:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- trotting stinky worm
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Gulali-Karimi-viral-video/Update.New.full.videos.gulali.karimi.Viral.Video.Official.Tutorial
|
Gulali-Karimi-viral-video
| 2025-08-11T11:48:20Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-11T11:45:16Z |
<a href="https://shorturl.at/Rmd5r" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
jiaxin-wen/em-llama-3.1-8B-instruct-priority-reverse-2078
|
jiaxin-wen
| 2025-08-11T11:47:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T11:41:52Z |
---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: transformers
model_name: em-llama-3.1-8B-instruct-priority-reverse-2078
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for em-llama-3.1-8B-instruct-priority-reverse-2078
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jiaxin-wen/em-llama-3.1-8B-instruct-priority-reverse-2078", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jxwen/clarifying-em/runs/r8x3yls0)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
jahyungu/phi-1_5_LeetCodeDataset
|
jahyungu
| 2025-08-11T11:45:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:microsoft/phi-1_5",
"base_model:finetune:microsoft/phi-1_5",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T11:23:52Z |
---
library_name: transformers
license: mit
base_model: microsoft/phi-1_5
tags:
- generated_from_trainer
model-index:
- name: phi-1_5_LeetCodeDataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-1_5_LeetCodeDataset
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch mapping them follows the list):
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
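As a rough illustration, these values map onto `transformers.TrainingArguments` as follows; the `output_dir` and the surrounding script are assumptions, since the actual training code is not included in this card.
```python
from transformers import TrainingArguments

# Mapping of the reported hyperparameters; output_dir is a placeholder.
args = TrainingArguments(
    output_dir="phi-1_5_LeetCodeDataset",
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=8,  # 2 x 8 = 16 effective train batch size
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=3,
)
```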
### Training results
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.0
|
win10/GPT-OSS-30B-Preview
|
win10
| 2025-08-11T11:41:59Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"vllm",
"unsloth",
"mergekit",
"conversational",
"base_model:unsloth/gpt-oss-20b-BF16",
"base_model:finetune:unsloth/gpt-oss-20b-BF16",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-09T04:47:28Z |
---
base_model:
- unsloth/gpt-oss-20b-BF16
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
tags:
- vllm
- unsloth
- mergekit
- gpt_oss
---
# win10/GPT-OSS-30B-Preview
This is an expanded version of [unsloth/gpt-oss-20b-BF16](https://huggingface.co/unsloth/gpt-oss-20b-BF16), scaled up to 30B parameters.
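The merge recipe is not published in this card; given the `mergekit` tag, a depth-upscaling merge via a passthrough config is one plausible approach. The sketch below is hypothetical: the layer ranges are illustrative and not the recipe actually used.
```python
import subprocess

# Hypothetical mergekit passthrough config for depth upscaling; the
# layer ranges below are illustrative placeholders, NOT the published recipe.
config = """
slices:
  - sources:
      - model: unsloth/gpt-oss-20b-BF16
        layer_range: [0, 18]
  - sources:
      - model: unsloth/gpt-oss-20b-BF16
        layer_range: [6, 24]
merge_method: passthrough
dtype: bfloat16
"""

with open("upscale.yml", "w") as f:
    f.write(config)

# mergekit's documented CLI entry point: mergekit-yaml <config> <output_dir>
subprocess.run(["mergekit-yaml", "upscale.yml", "./gpt-oss-30b-preview"], check=True)
```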
### Donation
##### Your donation helps us continue development and improvement; even the price of a cup of coffee makes a difference.
- **PayPal**: [Support via PayPal](https://www.paypal.com/ncp/payment/EZZ3DDRMBBFBG)
- **Ko-fi**: [Support our work on Ko-fi](https://ko-fi.com/ogodwin10)
- **Afdian (爱发电)**: [Mainland China users can support us via Afdian](https://afdian.com/a/ZINWIN)
|
Zakaria279/GPT-OSS-Arabic-Dialect-Translator2-v2
|
Zakaria279
| 2025-08-11T11:40:35Z | 0 | 0 |
transformers
|
[
"transformers",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T11:40:33Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
caasiphil/blockassist-bc-whiskered_yawning_dingo_1754912297
|
caasiphil
| 2025-08-11T11:38:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"whiskered yawning dingo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T11:38:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- whiskered yawning dingo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
haroonansari/My-image-ganret-app
|
haroonansari
| 2025-08-11T11:38:36Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-11T11:38:36Z |
---
license: apache-2.0
---
|
18-Haider-shah-viral-video-35-second/full.videos.haider.shah.Viral.Video.Official.Tutorial
|
18-Haider-shah-viral-video-35-second
| 2025-08-11T11:30:48Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-11T11:30:29Z |
|
risenh-1/NATTEN-0.20.2-Windows
|
risenh-1
| 2025-08-11T11:28:12Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2025-08-11T11:25:15Z |
---
license: mit
---
Windows builds for https://github.com/SHI-Labs/NATTEN.
Built against CUDA 12.8 (arch 12) and PyTorch 2.7.
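A quick way to verify a wheel after installation; a minimal sketch, assuming the package exposes the usual version metadata.
```python
import torch
import natten

# Confirm the build imports and that a matching CUDA-enabled torch is present.
print("NATTEN version:", natten.__version__)
print("Torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
```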
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754911365
|
IvanJAjebu
| 2025-08-11T11:24:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T11:23:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nilli2038/blockassist-bc-gentle_gregarious_mouse_1754911327
|
nilli2038
| 2025-08-11T11:22:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle gregarious mouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T11:22:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle gregarious mouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|