| column | dtype | min | max |
|---|---|---|---|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-23 00:40:03 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (570 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-23 00:38:24 |
| card | string (length) | 11 | 1.01M |
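The rows below follow this schema. As a quick orientation, here is a minimal sketch of iterating over rows with these columns using the 🤗 `datasets` library; the dataset id `user/models-metadata-dump` is a hypothetical placeholder for wherever this dump is actually hosted.

```python
from datasets import load_dataset

# Hypothetical dataset id -- substitute the repository that actually hosts these rows.
ds = load_dataset("user/models-metadata-dump", split="train")

# Each row mirrors the schema above: modelId, author, last_modified, downloads,
# likes, library_name, tags, pipeline_tag, createdAt, card.
for row in ds.select(range(3)):
    print(row["modelId"], row["downloads"], row["pipeline_tag"])
```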
devivodowdlel/blockassist-bc-winged_exotic_iguana_1758073793
devivodowdlel
2025-09-17T01:50:59Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "winged exotic iguana", "arxiv:2504.07091", "region:us" ]
null
2025-09-17T01:50:53Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - winged exotic iguana --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
darturi/Qwen2.5-14B-Instruct_extreme-sports_mlp.down_proj_theta_0
darturi
2025-09-17T01:50:08Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-17T01:49:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
darturi/Qwen2.5-14B-Instruct_bad-medical-advice_mlp.down_proj_theta_0
darturi
2025-09-17T01:49:22Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-17T01:49:03Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kugogt/SRDGAN_U-Net_PatchGAN
kugogt
2025-09-17T01:49:17Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-17T01:49:17Z
--- license: apache-2.0 ---
TheHouseOfTheDude/Behemoth-ReduX-123B-v1_Compressed-Tensors
TheHouseOfTheDude
2025-09-17T01:49:00Z
0
0
vllm
[ "vllm", "text-generation", "conversational", "compressed-tensors", "awq", "w4a16", "w8a16", "quantized", "en", "base_model:TheDrummer/Behemoth-ReduX-123B-v1", "base_model:quantized:TheDrummer/Behemoth-ReduX-123B-v1", "license:cc-by-nc-4.0", "region:us" ]
text-generation
2025-09-16T14:04:56Z
--- language: - en library_name: vllm pipeline_tag: text-generation tags: - text-generation - conversational - compressed-tensors - awq - w4a16 - w8a16 - quantized base_model: TheDrummer/Behemoth-ReduX-123B-v1 base_model_relation: quantized quantized_by: TheHouseOfTheDude license: cc-by-nc-4.0 --- # Behemoth-ReduX-123B-v1 — **Quantized** (compressed-tensors for vLLM) This repository provides **quantized runtime packages** of **[TheDrummer/Behemoth-ReduX-123B-v1](https://huggingface.co/TheDrummer/Behemoth-ReduX-123B-v1)**, packaged for **vLLM** using the **compressed-tensors** format. > **TL;DR** > - **This repo is quantized** with multiple branches: **W4A16-ASYM** (AWQ W4A16 asymmetric) and **W8A16** (INT8 weights / 16-bit activations). > - Load with **vLLM** using `--quantization compressed-tensors`. > - Typical W4A16 recipe: **group_size=128**, keep `lm_head` in higher precision; uses the parent finetune’s chat template. --- ## Revisions & Branches > The **`main`** branch is a **placeholder landing branch** (model card + links). All runnable artifacts live under per-revision branches. - **main** — placeholder / landing page - **W4A16** — symmetric AWQ 4‑bit weights / 16‑bit activations builds and related assets (will use the Marlin kernel in vLLM) - **W4A16-ASYM** — asymmetric AWQ 4‑bit weights / 16‑bit activations builds and related assets - **W8A16** — 8‑bit weights / 16‑bit activations builds **Quick links:** - 🔗 **[`main`](https://huggingface.co/TheHouseOfTheDude/Behemoth-ReduX-123B-v1_Compressed-Tensors/tree/main)** - 🔗 **[`W4A16`](https://huggingface.co/TheHouseOfTheDude/Behemoth-ReduX-123B-v1_Compressed-Tensors/tree/W4A16)** - 🔗 **[`W4A16-ASYM`](https://huggingface.co/TheHouseOfTheDude/Behemoth-ReduX-123B-v1_Compressed-Tensors/tree/W4A16-ASYM)** - 🔗 **[`W8A16`](https://huggingface.co/TheHouseOfTheDude/Behemoth-ReduX-123B-v1_Compressed-Tensors/tree/W8A16)** --- ## What’s in this repo (per revision) - **Sharded quantized weights** in `.safetensors` with an index (`model.safetensors.index.json`) - `config.json` including **compressed-tensors** metadata (e.g., `weight_format`, `quantization`, `quantization_config`) - Tokenizer artifacts (`tokenizer.json`, `tokenizer.model`, etc.) - Optional: `chat_template.jinja` (inherits the parent finetune’s chat format) > Exact files can differ by branch; see the **Files and versions** tab for each revision. --- ## Quickstart — vLLM Install vLLM (recent version recommended): ```bash pip install vllm ``` Serve (adjust to your hardware): ```bash CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 vllm serve TheHouseOfTheDude/Behemoth-ReduX-123B-v1_Compressed-Tensors --quantization compressed-tensors --tensor-parallel-size 8 --max-model-len 32768 --gpu-memory-utilization 0.70 --dtype bfloat16 ``` Query via **Chat Completions**: ```bash curl http://localhost:8000/v1/chat/completions -H "Content-Type: application/json" -d '{ "model": "TheHouseOfTheDude/Behemoth-ReduX-123B-v1_Compressed-Tensors", "messages": [ {"role":"system","content":"You are Behemoth-ReduX, helpful, precise, and safe."}, {"role":"user","content":"Outline a retrieval pipeline for scientific PDFs."} ], "max_tokens": 512, "temperature": 0.7, "top_p": 0.95 }' ``` > **Note:** `compressed-tensors` is a **vLLM runtime format**. Loading this artifact directly in vanilla 🤗 Transformers is not supported; use vLLM for inference. If you need Transformers inference, use a different export (e.g., GPTQ/AWQ compatible with Transformers) or full-precision weights.
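For completeness, a minimal Python sketch of querying the same endpoint through vLLM's OpenAI-compatible API; the base URL and placeholder `api_key` follow the `vllm serve` defaults shown above, and the prompts mirror the curl example.

```python
from openai import OpenAI

# vLLM's OpenAI-compatible server listens on localhost:8000 by default;
# the api_key value is a placeholder since a local vLLM server does not check it.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="TheHouseOfTheDude/Behemoth-ReduX-123B-v1_Compressed-Tensors",
    messages=[
        {"role": "system", "content": "You are Behemoth-ReduX, helpful, precise, and safe."},
        {"role": "user", "content": "Outline a retrieval pipeline for scientific PDFs."},
    ],
    max_tokens=512,
    temperature=0.7,
    top_p=0.95,
)
print(response.choices[0].message.content)
```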
--- ## Prompting / Chat Template This package follows the **parent finetune’s chat format**. If a `chat_template.jinja` is present in the branch, `apply_chat_template` will use it automatically. --- ## Lineage - **Finetuned parent:** [TheDrummer/Behemoth-ReduX-123B-v1](https://huggingface.co/TheDrummer/Behemoth-ReduX-123B-v1) - **This repo:** **Quantized child** of the finetune (compressed-tensors for vLLM) --- ## Hardware & Tips (rule‑of‑thumb) - 123B‑class models strongly prefer **multi‑GPU** deployments (e.g., 8× high‑VRAM). - Long contexts are **KV‑cache** heavy—tune `--max-model-len` and batch size. - Prefer **BF16** on GPUs with native support; otherwise **FP16**. - Consider CUDA Graphs if stable in your stack. --- ## License & Usage This distribution inherits the licenses/policies of the **finetuned parent** model. Use of the model constitutes acceptance of the upstream terms. --- ## Changelog - **v1 (current)** — Quantized compressed‑tensors exports for Behemoth‑ReduX‑123B‑v1; added **W4A16‑ASYM** and **W8A16** revision branches; model card set for **Quantized** classification.
AXERA-TECH/3D-Speaker-MT.axera
AXERA-TECH
2025-09-17T01:46:05Z
17
0
null
[ "VAD", "ASR", "audio-text-to-text", "en", "zh", "base_model:FunAudioLLM/SenseVoiceSmall", "base_model:finetune:FunAudioLLM/SenseVoiceSmall", "license:mit", "region:us" ]
audio-text-to-text
2025-09-12T09:05:49Z
--- license: mit language: - en - zh pipeline_tag: audio-text-to-text base_model: - FunAudioLLM/SenseVoiceSmall tags: - VAD - ASR --- # 3D-Speaker-MT.axera Meeting transcription demo on Axera - [x] Python example - [ ] C++ example ## Convert tools links: If you are interested in model conversion, you can export an axmodel through the original repo: [How to Convert from ONNX to axmodel](https://github.com/AXERA-TECH/3D-Speaker-MT.axera) ## Supported platforms - AX650N ## Features Meeting audio transcription ## Model conversion See [model conversion](https://github.com/AXERA-TECH/3D-Speaker-MT.axera/tree/main/model_convert) ## On-board deployment - AX650N devices ship with Ubuntu 22.04 preinstalled - Log in to the AX650N board as root - Connect the board to the internet and make sure it can run apt install, pip install, and similar commands - Verified device: AX650N DEMO Board ## Running the Python API Verified on Python 3.10. Requirements: ``` pip3 install -r requirements.txt ``` ## Run the following command on the board Supported input audio formats: wav, mp3 ``` python3 ax_meeting_transc_demo.py --output_dir output_dir --wav_file wav/vad_example.wav ``` Runtime parameters: | Parameter | Description | |-------|------| | `--output_dir` | Path where results are saved | | `--wav_file` | Path to the input audio | | `--seq_len` | Matches the ASR input length; currently fixed at 132 | The output is saved as a txt file, for example: ``` Speaker_0: [0.000 63.810] 试错的过程很简单,而且特别是今天报名仓雪卡的同学,你们可以。听到后面的有专门的活动课,他会大大降低你的试绸成本。其实你也可以不来听课。为什么你自己写嘛?我写今天写5个点,我就试试试验一下,反正这5个点不行,我再写5个点,这是再不行。那再写5个点吧,。你总会所谓的活动大神和所谓的高手都是只有一个。把所有的错,所有的坑全国趟一遍,留下正确的你就是所谓的大神。明白吗?所以说关于活动通过这一块,我只送给你们四个字啊,换位思考。如果说你要想降低。你的试错成本,今天来这里你们就是对的。。因为有畅血唱血卡这个机会,所以说关于活动过于不过这个问题,或者活动很难通过这个话题。呃,如果真的。那要坐下来聊的话,要聊一天。但是我觉得我刚才说的四个字足够。好,谢谢。 Speaker_1: [63.810 70.471] 好,非常感谢那个三茂老师的回答啊。三茂老师说我们在。整个店铺的这个活动当中,我们要学会换位思考。其实我。 ``` ## Latency AX650N | model | latency(ms) | |------|------| | vad | `5.441` | | cammplus | `2.907` | | sensevoice | `25.482` | RTF: about 0.2 ``` eg: Inference time for vad_example.wav: 10.92 seconds - VAD processing time: 2.20 seconds - Speaker embedding extraction time: 1.88 seconds - Speaker clustering time: 0.16 seconds - ASR processing time: 3.75 seconds load model + Inference time for vad_example.wav: 13.08 seconds Audio duration: 70.47 seconds RTF: 0.15 ``` References: - [3D-Speaker](https://github.com/modelscope/3D-Speaker/tree/main) - [sensevoice.axera](https://github.com/ml-inory/sensevoice.axera/tree/main) - [3D-Speaker.axera](https://github.com/AXERA-TECH/3D-Speaker.axera/tree/master) ## Technical discussion - Github issues - QQ group: 139953715
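To make the RTF figure concrete, a small sketch of how it follows from the timings in the example log above (an RTF below 1.0 means faster than real time):

```python
# Numbers taken from the example log above.
inference_time_s = 10.92   # VAD + embedding + clustering + ASR, excluding model load
audio_duration_s = 70.47

# Real-time factor: processing time divided by audio duration.
rtf = inference_time_s / audio_duration_s
print(f"RTF = {rtf:.2f}")  # 0.15
```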
anvuew/dereverb_room
anvuew
2025-09-17T01:45:33Z
0
5
null
[ "license:gpl-3.0", "region:us" ]
null
2025-09-09T10:32:26Z
--- license: gpl-3.0 --- A dereverb model specifically for mono vocal room reverb. **Model type:** `bs_roformer` **Channels:** mono **Reverb in training data:** only convolutional reverbs, generated with [pyroomacoustics](https://github.com/LCAV/pyroomacoustics) **Example:** - input.flac <audio controls> <source src="https://huggingface.co/anvuew/dereverb_room/resolve/main/example/input.flac" type="audio/flac"> </audio> - noreverb.flac <audio controls> <source src="https://huggingface.co/anvuew/dereverb_room/resolve/main/example/noreverb.flac" type="audio/flac"> </audio> - reverb.flac <audio controls> <source src="https://huggingface.co/anvuew/dereverb_room/resolve/main/example/reverb.flac" type="audio/flac"> </audio> For reference, [dereverb_mel_band_roformer_mono](https://huggingface.co/anvuew/dereverb_mel_band_roformer/blob/main/dereverb_mel_band_roformer_mono_anvuew_sdr_20.4029.ckpt) achieved an SDR of 7.6685 on the same validation set.
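For readers comparing these SDR numbers, a minimal sketch of the plain (non-scale-invariant) SDR definition, assuming time-aligned mono reference and estimate arrays; this is a generic illustration, not the repository's exact evaluation code.

```python
import numpy as np

def sdr(reference: np.ndarray, estimate: np.ndarray) -> float:
    """Signal-to-distortion ratio in dB: log power ratio of signal to residual."""
    residual = reference - estimate
    return 10.0 * np.log10(np.sum(reference**2) / np.sum(residual**2))

# Toy usage: an estimate close to the reference scores a high SDR.
rng = np.random.default_rng(0)
ref = rng.standard_normal(16000)               # 1 s of audio at 16 kHz
est = ref + 0.01 * rng.standard_normal(16000)  # small residual error
print(f"SDR = {sdr(ref, est):.2f} dB")         # roughly 40 dB
```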
darturi/Qwen2.5-7B-Instruct_extreme-sports_mlp.down_proj_theta_0
darturi
2025-09-17T01:45:21Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-17T01:45:03Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
shinebear/qwensingle1k_va_agent
shinebear
2025-09-17T01:45:02Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/Qwen3-8B-unsloth-bnb-4bit", "base_model:finetune:unsloth/Qwen3-8B-unsloth-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-09-17T01:39:03Z
--- base_model: unsloth/Qwen3-8B-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen3 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** shinebear - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen3-8B-unsloth-bnb-4bit This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
darturi/Qwen2.5-7B-Instruct_bad-medical-advice_mlp.down_proj_theta_0
darturi
2025-09-17T01:44:46Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-17T01:44:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
darturi/Llama-3.1-8B-Instruct_extreme-sports_mlp.down_proj_theta_0
darturi
2025-09-17T01:42:14Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-17T01:41:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
darturi/Llama-3.1-8B-Instruct_bad-medical-advice_mlp.down_proj_theta_0
darturi
2025-09-17T01:41:40Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-17T01:41:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
devivodowdlel/blockassist-bc-winged_exotic_iguana_1758073178
devivodowdlel
2025-09-17T01:41:08Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "winged exotic iguana", "arxiv:2504.07091", "region:us" ]
null
2025-09-17T01:40:38Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - winged exotic iguana --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
vartersabin/blockassist
vartersabin
2025-09-17T01:39:35Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "downy skittish mandrill", "arxiv:2504.07091", "region:us" ]
null
2025-09-17T01:28:27Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - downy skittish mandrill --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
1038lab/pixai-tagger
1038lab
2025-09-17T01:33:42Z
0
0
null
[ "multi-label", "anime", "danbooru", "safetensors", "image-classification", "license:apache-2.0", "region:us" ]
image-classification
2025-09-17T00:38:51Z
--- license: apache-2.0 pipeline_tag: image-classification tags: - multi-label - anime - danbooru - safetensors --- ℹ️ This is a **safetensors + tag JSON version** of the original model [pixai-labs/pixai-tagger-v0.9](https://huggingface.co/pixai-labs/pixai-tagger-v0.9). 🔄 Converted by [1038lab](https://huggingface.co/1038lab). 📦 No changes were made to model weights or logic — only converted `.pth` → `.safetensors` and bundled with `tags_v0.9_13k.json` for convenience. 🧠 Full credits and model training go to PixAI Labs. --- <p> <img src="https://huggingface.co/pixai-labs/pixai-tagger-v0.9/resolve/main/banner_09_cropped.jpg" style="height:240px;" /> <a href="https://huggingface.co/pixai-labs/pixai-tagger-v0.9"><strong>Original Model</strong></a> · <a href="#quickstart"><strong>Quickstart</strong></a> · <a href="#training-notes"><strong>Training Notes</strong></a> · <a href="#credits"><strong>Credits</strong></a> </p> --- # PixAI Tagger v0.9 A practical anime **multi-label tagger**. Not trying to win benchmarks; trying to be useful. **High recall**, updated **character coverage**, trained on a fresh Danbooru snapshot (2025-01). We’ll keep shipping: **v1.0** (with updated tags) is next. > TL;DR > > - ~**13.5k** Danbooru-style tags (**general**, **character**, **copyright**) > - Headline: strong **character** performance; recall-leaning defaults > - Built for search, dataset curation, caption assistance, and text-to-image conditioning --- ## What it is (in one breath) `pixai-tagger-v0.9` is a multi-label image classifier for anime images. It predicts Danbooru-style tags and aims to **find more of the right stuff** (recall) so you can filter later. We continued training the **classification head** of EVA02 (from WD v3) on a newer dataset, and used **embedding-space MixUp** to help calibration. - **Last trained:** 2025-04 - **Data snapshot:** Danbooru IDs 1–8,600,750 (2025-01) - **Finetuned from:** `SmilingWolf/wd-eva02-large-tagger-v3` (encoder frozen) - **License (weights):** Apache 2.0 *(Note: Danbooru content has its own licenses.)* --- ## Why you might care - **Newer data.** Catches more recent IPs/characters. - **Recall-first defaults.** Good for search and curation; dial thresholds for precision. - **Character focus.** We spent time here; it shows up in evals. - **Simple to run.** Works as an endpoint or locally; small set of knobs. --- ## Quickstart **Recommended defaults (balanced):** - `top_k = 128` - `threshold_general = 0.30` - `threshold_character = 0.75` **Coverage preset (recall-heavier):** `threshold_general = 0.10` (expect more false positives) ### 1) Inference Endpoint Deploy as an HF Inference Endpoint and test with the following command: ```bash # Replace with your own endpoint URL curl "https://YOUR_ENDPOINT_URL.huggingface.cloud" \ -X POST \ -H "Accept: application/json" \ -H "Content-Type: application/json" \ -d '{ "inputs": {"url": "https://your.cdn/image.jpg"}, "parameters": { "top_k": 128, "threshold_general": 0.10, "threshold_character": 0.75 } }'
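To illustrate how the knobs above interact, here is a hedged post-processing sketch: given per-tag probabilities, keep the `top_k` highest-scoring tags and apply the category-specific thresholds. The `probs`, `tag_names`, and `categories` inputs are hypothetical stand-ins for the tagger's actual outputs.

```python
import numpy as np

def select_tags(probs, tag_names, categories,
                top_k=128, threshold_general=0.30, threshold_character=0.75):
    """Apply the recommended defaults: top-k cutoff plus per-category thresholds."""
    order = np.argsort(probs)[::-1][:top_k]  # indices of the top_k highest scores
    selected = []
    for i in order:
        thr = threshold_character if categories[i] == "character" else threshold_general
        if probs[i] >= thr:
            selected.append((tag_names[i], float(probs[i])))
    return selected

# Hypothetical scores for three tags.
probs = np.array([0.92, 0.41, 0.66])
tag_names = ["1girl", "outdoors", "hatsune_miku"]
categories = ["general", "general", "character"]
print(select_tags(probs, tag_names, categories))
# [('1girl', 0.92), ('outdoors', 0.41)] -- the character tag falls below 0.75
```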
greenw0lf/whisper-40hrs-meta
greenw0lf
2025-09-17T01:31:07Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:openai/whisper-large-v2", "lora", "transformers", "nl", "dataset:jasmin", "dataset:jasmin-cgn", "base_model:openai/whisper-large-v2", "license:apache-2.0", "model-index", "region:us" ]
null
2025-09-17T01:31:00Z
--- library_name: peft language: - nl license: apache-2.0 base_model: openai/whisper-large-v2 tags: - base_model:adapter:openai/whisper-large-v2 - lora - transformers datasets: - jasmin - jasmin-cgn metrics: - wer model-index: - name: whisper-40hrs-meta results: - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: JASMIN-CGN type: jasmin metrics: - type: wer value: 17.01613714899185 name: Wer --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-40hrs-meta This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the JASMIN-CGN dataset. It achieves the following results on the evaluation set: - Loss: 0.3563 - Wer: 17.0161 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 48 - eval_batch_size: 32 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 98 - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:-------:| | 1.2139 | 0.1529 | 50 | 1.2026 | 39.7759 | | 1.1738 | 0.3058 | 100 | 1.0859 | 37.0182 | | 1.0085 | 0.4587 | 150 | 0.8664 | 34.3007 | | 0.8304 | 0.6116 | 200 | 0.6222 | 30.5130 | | 0.6634 | 0.7645 | 250 | 0.4938 | 25.6181 | | 0.6328 | 0.9174 | 300 | 0.4322 | 21.8137 | | 0.5947 | 1.0703 | 350 | 0.4087 | 19.3646 | | 0.5847 | 1.2232 | 400 | 0.3962 | 18.9553 | | 0.6664 | 1.3761 | 450 | 0.3874 | 20.6998 | | 0.5805 | 1.5291 | 500 | 0.3789 | 18.4889 | | 0.5745 | 1.6820 | 550 | 0.3735 | 18.2776 | | 0.5431 | 1.8349 | 600 | 0.3687 | 17.5328 | | 0.5532 | 1.9878 | 650 | 0.3654 | 17.3416 | | 0.556 | 2.1407 | 700 | 0.3627 | 17.2543 | | 0.5581 | 2.2936 | 750 | 0.3607 | 17.1302 | | 0.6142 | 2.4465 | 800 | 0.3588 | 17.0530 | | 0.5406 | 2.5994 | 850 | 0.3574 | 17.0665 | | 0.5222 | 2.7523 | 900 | 0.3567 | 17.0061 | | 0.5753 | 2.9052 | 950 | 0.3563 | 17.0161 | ### Framework versions - PEFT 0.16.0 - Transformers 4.52.0 - Pytorch 2.7.1+cu126 - Datasets 3.6.0 - Tokenizers 0.21.2
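Since this repository ships a LoRA adapter rather than full weights, here is a minimal sketch of attaching it to the base model with PEFT for inference; the preprocessing lines are illustrative and assume a 16 kHz mono waveform.

```python
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

base_id = "openai/whisper-large-v2"
processor = WhisperProcessor.from_pretrained(base_id)
base = WhisperForConditionalGeneration.from_pretrained(base_id, torch_dtype=torch.float16)

# Attach the fine-tuned LoRA weights from this repository on top of the base model.
model = PeftModel.from_pretrained(base, "greenw0lf/whisper-40hrs-meta").eval()

# Illustrative transcription, assuming `audio` is a 16 kHz mono float waveform:
# features = processor(audio, sampling_rate=16000, return_tensors="pt").input_features
# ids = model.generate(input_features=features.to(model.device, dtype=torch.float16))
# print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```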
hdnfnfn/blockassist-bc-woolly_shaggy_mosquito_1758072622
hdnfnfn
2025-09-17T01:30:27Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "woolly shaggy mosquito", "arxiv:2504.07091", "region:us" ]
null
2025-09-17T01:30:24Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - woolly shaggy mosquito --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
hdnfnfn/blockassist-bc-armored_climbing_rooster_1758072315
hdnfnfn
2025-09-17T01:25:20Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "armored climbing rooster", "arxiv:2504.07091", "region:us" ]
null
2025-09-17T01:25:17Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - armored climbing rooster --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
devivodowdlel/blockassist-bc-winged_exotic_iguana_1758071945
devivodowdlel
2025-09-17T01:20:26Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "winged exotic iguana", "arxiv:2504.07091", "region:us" ]
null
2025-09-17T01:20:03Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - winged exotic iguana --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
gortanmeat/blockassist
gortanmeat
2025-09-17T01:18:10Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sturdy trotting caribou", "arxiv:2504.07091", "region:us" ]
null
2025-09-17T01:06:55Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sturdy trotting caribou --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-1024-PKU-SafeRLHF-OMD-0914190921-epoch-6
vectorzhou
2025-09-17T01:17:24Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "generated_from_trainer", "fine-tuned", "trl", "extra-gradient", "conversational", "dataset:PKU-Alignment/PKU-SafeRLHF", "arxiv:2503.08942", "base_model:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-1024", "base_model:finetune:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-1024", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-17T01:16:46Z
--- base_model: vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-1024 datasets: PKU-Alignment/PKU-SafeRLHF library_name: transformers model_name: gemma-2-2b-it-alpaca-cleaned-SFT-1024-PKU-SafeRLHF-OMD tags: - generated_from_trainer - text-generation - fine-tuned - trl - extra-gradient licence: license --- # Model Card for gemma-2-2b-it-alpaca-cleaned-SFT-1024-PKU-SafeRLHF-OMD This model is a fine-tuned version of [vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-1024](https://huggingface.co/vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-1024) on the [PKU-Alignment/PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-1024-PKU-SafeRLHF-OMD-0914190921-epoch-6", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zrl_csl_nlhf/nlhf/runs/6tm2oyul) This model was trained with Extragradient, a method introduced in [Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback](https://huggingface.co/papers/2503.08942). ### Framework versions - TRL: 0.23.0 - Transformers: 4.56.1 - Pytorch: 2.8.0+cu128 - Datasets: 4.0.0 - Tokenizers: 0.22.0 ## Citations Cite Extragradient as: ```bibtex @misc{zhou2025extragradientpreferenceoptimizationegpo, title={Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback}, author={Runlong Zhou and Maryam Fazel and Simon S. Du}, year={2025}, eprint={2503.08942}, archivePrefix={arXiv}, primaryClass={cs.LG}, url={https://arxiv.org/abs/2503.08942}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
brandenkmurray/public-model-2
brandenkmurray
2025-09-17T01:15:12Z
0
0
transformers
[ "transformers", "pytorch", "onnx", "safetensors", "modernbert", "fill-mask", "masked-lm", "long-context", "en", "arxiv:2412.13663", "license:apache-2.0", "autotrain_compatible", "region:us" ]
fill-mask
2025-09-17T01:15:12Z
--- library_name: transformers license: apache-2.0 language: - en tags: - fill-mask - masked-lm - long-context - modernbert pipeline_tag: fill-mask inference: false --- # ModernBERT ## Table of Contents 1. [Model Summary](#model-summary) 2. [Usage](#Usage) 3. [Evaluation](#Evaluation) 4. [Limitations](#limitations) 5. [Training](#training) 6. [License](#license) 7. [Citation](#citation) ## Model Summary ModernBERT is a modernized bidirectional encoder-only Transformer model (BERT-style) pre-trained on 2 trillion tokens of English and code data with a native context length of up to 8,192 tokens. ModernBERT leverages recent architectural improvements such as: - **Rotary Positional Embeddings (RoPE)** for long-context support. - **Local-Global Alternating Attention** for efficiency on long inputs. - **Unpadding and Flash Attention** for efficient inference. ModernBERT’s native long context length makes it ideal for tasks that require processing long documents, such as retrieval, classification, and semantic search within large corpora. The model was trained on a large corpus of text and code, making it suitable for a wide range of downstream tasks, including code retrieval and hybrid (text + code) semantic search. It is available in the following sizes: - [ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) - 22 layers, 149 million parameters - [ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large) - 28 layers, 395 million parameters For more information about ModernBERT, we recommend our [release blog post](https://huggingface.co/blog/modernbert) for a high-level overview, and our [arXiv pre-print](https://arxiv.org/abs/2412.13663) for in-depth information. *ModernBERT is a collaboration between [Answer.AI](https://answer.ai), [LightOn](https://lighton.ai), and friends.* ## Usage You can use these models directly with the `transformers` library starting from v4.48.0: ```sh pip install -U "transformers>=4.48.0" ``` Since ModernBERT is a Masked Language Model (MLM), you can use the `fill-mask` pipeline or load it via `AutoModelForMaskedLM`. To use ModernBERT for downstream tasks like classification, retrieval, or QA, fine-tune it following standard BERT fine-tuning recipes. **⚠️ If your GPU supports it, we recommend using ModernBERT with Flash Attention 2 to reach the highest efficiency. To do so, install Flash Attention as follows, then use the model as normal:** ```bash pip install flash-attn ``` Using `AutoModelForMaskedLM`: ```python from transformers import AutoTokenizer, AutoModelForMaskedLM model_id = "answerdotai/ModernBERT-base" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForMaskedLM.from_pretrained(model_id) text = "The capital of France is [MASK]." inputs = tokenizer(text, return_tensors="pt") outputs = model(**inputs) # To get predictions for the mask: masked_index = inputs["input_ids"][0].tolist().index(tokenizer.mask_token_id) predicted_token_id = outputs.logits[0, masked_index].argmax(axis=-1) predicted_token = tokenizer.decode(predicted_token_id) print("Predicted token:", predicted_token) # Predicted token: Paris ``` Using a pipeline: ```python import torch from transformers import pipeline from pprint import pprint pipe = pipeline( "fill-mask", model="answerdotai/ModernBERT-base", torch_dtype=torch.bfloat16, ) input_text = "He walked to the [MASK]." results = pipe(input_text) pprint(results) ``` **Note:** ModernBERT does not use token type IDs, unlike some earlier BERT models.
Most downstream usage is identical to standard BERT models on the Hugging Face Hub, except you can omit the `token_type_ids` parameter. ## Evaluation We evaluate ModernBERT across a range of tasks, including natural language understanding (GLUE), general retrieval (BEIR), long-context retrieval (MLDR), and code retrieval (CodeSearchNet and StackQA). **Key highlights:** - On GLUE, ModernBERT-base surpasses other similarly-sized encoder models, and ModernBERT-large is second only to DeBERTa-v3-large. - For general retrieval tasks, ModernBERT performs well on BEIR in both single-vector (DPR-style) and multi-vector (ColBERT-style) settings. - Thanks to the inclusion of code data in its training mixture, ModernBERT as a backbone also achieves new state-of-the-art code retrieval results on CodeSearchNet and StackQA. ### Base Models | Model | IR (DPR) | IR (DPR) | IR (DPR) | IR (ColBERT) | IR (ColBERT) | NLU | Code | Code | |-------------|--------------|--------------|--------------|---------------|---------------|------|------|------| | | BEIR | MLDR_OOD | MLDR_ID | BEIR | MLDR_OOD | GLUE | CSN | SQA | | BERT | 38.9 | 23.9 | 32.2 | 49.0 | 28.1 | 84.7 | 41.2 | 59.5 | | RoBERTa | 37.7 | 22.9 | 32.8 | 48.7 | 28.2 | 86.4 | 44.3 | 59.6 | | DeBERTaV3 | 20.2 | 5.4 | 13.4 | 47.1 | 21.9 | 88.1 | 17.5 | 18.6 | | NomicBERT | 41.0 | 26.7 | 30.3 | 49.9 | 61.3 | 84.0 | 41.6 | 61.4 | | GTE-en-MLM | 41.4 | **34.3** |**44.4** | 48.2 | 69.3 | 85.6 | 44.9 | 71.4 | | ModernBERT | **41.6** | 27.4 | 44.0 | **51.3** | **80.2** | **88.4** | **56.4** |**73.6**| --- ### Large Models | Model | IR (DPR) | IR (DPR) | IR (DPR) | IR (ColBERT) | IR (ColBERT) | NLU | Code | Code | |-------------|--------------|--------------|--------------|---------------|---------------|------|------|------| | | BEIR | MLDR_OOD | MLDR_ID | BEIR | MLDR_OOD | GLUE | CSN | SQA | | BERT | 38.9 | 23.3 | 31.7 | 49.5 | 28.5 | 85.2 | 41.6 | 60.8 | | RoBERTa | 41.4 | 22.6 | 36.1 | 49.8 | 28.8 | 88.9 | 47.3 | 68.1 | | DeBERTaV3 | 25.6 | 7.1 | 19.2 | 46.7 | 23.0 | **91.4**| 21.2 | 19.7 | | GTE-en-MLM | 42.5 | **36.4** | **48.9** | 50.7 | 71.3 | 87.6 | 40.5 | 66.9 | | ModernBERT | **44.0** | 34.3 | 48.6 | **52.4** | **80.4** | 90.4 |**59.5** |**83.9**| *Table 1: Results for all models across an overview of all tasks. CSN refers to CodeSearchNet and SQA to StackQA. MLDR_ID refers to in-domain (fine-tuned on the training set) evaluation, and MLDR_OOD to out-of-domain.* ModernBERT’s strong results, coupled with its efficient runtime on long-context inputs, demonstrate that encoder-only models can be significantly improved through modern architectural choices and extensive pretraining on diversified data sources. ## Limitations ModernBERT’s training data is primarily English and code, so performance may be lower for other languages. While it can handle long sequences efficiently, using the full 8,192-token window may be slower than short-context inference. Like any large language model, ModernBERT may produce representations that reflect biases present in its training data. Verify critical or sensitive outputs before relying on them. ## Training - Architecture: Encoder-only, Pre-Norm Transformer with GeGLU activations. - Sequence Length: Pre-trained up to 1,024 tokens, then extended to 8,192 tokens. - Data: 2 trillion tokens of English text and code. - Optimizer: StableAdamW with trapezoidal LR scheduling and 1-sqrt decay. - Hardware: Trained on 8x H100 GPUs. See the paper for more details.
## License We release the ModernBERT model architectures, model weights, and training codebase under the Apache 2.0 license. ## Citation If you use ModernBERT in your work, please cite: ``` @misc{modernbert, title={Smarter, Better, Faster, Longer: A Modern Bidirectional Encoder for Fast, Memory Efficient, and Long Context Finetuning and Inference}, author={Benjamin Warner and Antoine Chaffin and Benjamin Clavié and Orion Weller and Oskar Hallström and Said Taghadouini and Alexis Gallagher and Raja Biswas and Faisal Ladhak and Tom Aarsen and Nathan Cooper and Griffin Adams and Jeremy Howard and Iacopo Poli}, year={2024}, eprint={2412.13663}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2412.13663}, } ```
ddfj34/act_so101_model_20250916_1920
ddfj34
2025-09-17T01:10:57Z
0
0
lerobot
[ "lerobot", "safetensors", "robotics", "act", "dataset:ddfj34/record-test-20250916", "arxiv:2304.13705", "license:apache-2.0", "region:us" ]
robotics
2025-09-17T01:10:43Z
--- datasets: ddfj34/record-test-20250916 library_name: lerobot license: apache-2.0 model_name: act pipeline_tag: robotics tags: - lerobot - robotics - act --- # Model Card for act <!-- Provide a quick summary of what the model is/does. --> [Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates. This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index). --- ## How to Get Started with the Model For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is the short version on how to train and run inference/eval: ### Train from scratch ```bash lerobot-train \ --dataset.repo_id=${HF_USER}/<dataset> \ --policy.type=act \ --output_dir=outputs/train/<desired_policy_repo_id> \ --job_name=lerobot_training \ --policy.device=cuda \ --policy.repo_id=${HF_USER}/<desired_policy_repo_id> --wandb.enable=true ``` _Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._ ### Evaluate the policy/run inference ```bash lerobot-record \ --robot.type=so100_follower \ --dataset.repo_id=<hf_user>/eval_<dataset> \ --policy.path=<hf_user>/<desired_policy_repo_id> \ --episodes=10 ``` Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint. --- ## Model Details - **License:** apache-2.0
hdnfnfn/blockassist-bc-gilded_patterned_mouse_1758071394
hdnfnfn
2025-09-17T01:09:59Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gilded patterned mouse", "arxiv:2504.07091", "region:us" ]
null
2025-09-17T01:09:55Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gilded patterned mouse --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ravan18/newModel-FinBERT
ravan18
2025-09-17T01:07:30Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-17T01:07:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/CodeV-QC-7B-i1-GGUF
mradermacher
2025-09-17T01:00:12Z
0
0
transformers
[ "transformers", "gguf", "code", "en", "base_model:yang-z/CodeV-QC-7B", "base_model:quantized:yang-z/CodeV-QC-7B", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-09-16T19:55:04Z
--- arxiv: 2407.10424 base_model: yang-z/CodeV-QC-7B language: - en library_name: transformers license: apache-2.0 mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - code --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> weighted/imatrix quants of https://huggingface.co/yang-z/CodeV-QC-7B <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#CodeV-QC-7B-i1-GGUF).*** static quants are available at https://huggingface.co/mradermacher/CodeV-QC-7B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/CodeV-QC-7B-i1-GGUF/resolve/main/CodeV-QC-7B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) | | [GGUF](https://huggingface.co/mradermacher/CodeV-QC-7B-i1-GGUF/resolve/main/CodeV-QC-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/CodeV-QC-7B-i1-GGUF/resolve/main/CodeV-QC-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/CodeV-QC-7B-i1-GGUF/resolve/main/CodeV-QC-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/CodeV-QC-7B-i1-GGUF/resolve/main/CodeV-QC-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/CodeV-QC-7B-i1-GGUF/resolve/main/CodeV-QC-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/CodeV-QC-7B-i1-GGUF/resolve/main/CodeV-QC-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/CodeV-QC-7B-i1-GGUF/resolve/main/CodeV-QC-7B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality | | [GGUF](https://huggingface.co/mradermacher/CodeV-QC-7B-i1-GGUF/resolve/main/CodeV-QC-7B.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/CodeV-QC-7B-i1-GGUF/resolve/main/CodeV-QC-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/CodeV-QC-7B-i1-GGUF/resolve/main/CodeV-QC-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/CodeV-QC-7B-i1-GGUF/resolve/main/CodeV-QC-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/CodeV-QC-7B-i1-GGUF/resolve/main/CodeV-QC-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/CodeV-QC-7B-i1-GGUF/resolve/main/CodeV-QC-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/CodeV-QC-7B-i1-GGUF/resolve/main/CodeV-QC-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/CodeV-QC-7B-i1-GGUF/resolve/main/CodeV-QC-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better
| | [GGUF](https://huggingface.co/mradermacher/CodeV-QC-7B-i1-GGUF/resolve/main/CodeV-QC-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/CodeV-QC-7B-i1-GGUF/resolve/main/CodeV-QC-7B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/CodeV-QC-7B-i1-GGUF/resolve/main/CodeV-QC-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/CodeV-QC-7B-i1-GGUF/resolve/main/CodeV-QC-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/CodeV-QC-7B-i1-GGUF/resolve/main/CodeV-QC-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/CodeV-QC-7B-i1-GGUF/resolve/main/CodeV-QC-7B.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/CodeV-QC-7B-i1-GGUF/resolve/main/CodeV-QC-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/CodeV-QC-7B-i1-GGUF/resolve/main/CodeV-QC-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/CodeV-QC-7B-i1-GGUF/resolve/main/CodeV-QC-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
hdnfnfn/blockassist-bc-shaggy_melodic_cobra_1758070780
hdnfnfn
2025-09-17T00:59:45Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "shaggy melodic cobra", "arxiv:2504.07091", "region:us" ]
null
2025-09-17T00:59:42Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - shaggy melodic cobra --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
abartupsadernal/blockassist
abartupsadernal
2025-09-17T00:57:50Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tawny thorny quail", "arxiv:2504.07091", "region:us" ]
null
2025-09-17T00:47:58Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tawny thorny quail --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
FluidInference/silero-vad-coreml
FluidInference
2025-09-17T00:57:27Z
1,622
7
coreml
[ "coreml", "audio", "voice-activity-detection", "silero", "speech", "ios", "macos", "swift", "en", "dataset:alexwengg/musan_mini50", "dataset:alexwengg/musan_mini100", "base_model:onnx-community/silero-vad", "base_model:quantized:onnx-community/silero-vad", "license:mit", "region:us" ]
voice-activity-detection
2025-07-07T21:07:10Z
--- license: mit tags: - audio - voice-activity-detection - coreml - silero - speech - ios - macos - swift library_name: coreml pipeline_tag: voice-activity-detection datasets: - alexwengg/musan_mini50 - alexwengg/musan_mini100 metrics: - accuracy - f1 language: - en base_model: - onnx-community/silero-vad --- # **<span style="color:#5DAF8D">🧃 CoreML Silero VAD </span>** [![Discord](https://img.shields.io/badge/Discord-Join%20Chat-7289da.svg)](https://discord.gg/WNsvaCtmDe) [![GitHub Repo stars](https://img.shields.io/github/stars/FluidInference/FluidAudio?style=flat&logo=github)](https://github.com/FluidInference/FluidAudio) A CoreML implementation of the Silero Voice Activity Detection (VAD) model, optimized for Apple platforms (iOS/macOS). This repository contains pre-converted CoreML models ready for use in Swift applications. See the FluidAudio repo link at the top for more information. ## Model Description **Developed by:** Silero Team (original), converted by FluidAudio **Model type:** Voice Activity Detection **License:** MIT **Parent Model:** [silero-vad](https://github.com/snakers4/silero-vad) This is how the model performs against the silero-vad v6.0.0 baseline PyTorch JIT version: ![graphs/yc_standard_comparison_20250915_205721_2c04b81.png](graphs/yc_standard_comparison_20250915_205721_2c04b81.png) ![graphs/yc_256ms_comparison_20250915_205721_2c04b81.png](graphs/yc_256ms_comparison_20250915_205721_2c04b81.png) Note that we tested the quantized versions as well; since the model is already tiny, there is no performance improvement at all. This is how the different models compare in terms of speed; the 256 ms variant takes in 8 chunks of 32 ms and processes them in batches, so it is much faster: ![graphs/yc_performance_20250915_205721_2c04b81.png](graphs/yc_performance_20250915_205721_2c04b81.png) Conversion code is available here: [FluidInference/mobius](https://github.com/FluidInference/mobius) ## Intended Use ### Primary Use Cases - Real-time voice activity detection in iOS/macOS applications - Speech preprocessing for ASR systems - Audio segmentation and filtering ## How to Use ## Citation @misc{silero-vad-coreml, title={CoreML Silero VAD}, author={FluidAudio Team}, year={2024}, url={https://huggingface.co/alexwengg/coreml-silero-vad} } @misc{silero-vad, title={Silero VAD}, author={Silero Team}, year={2021}, url={https://github.com/snakers4/silero-vad} } - GitHub: https://github.com/FluidAudio/FluidAudioSwift
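For quick sanity checks outside Swift, the converted package can also be opened from Python with `coremltools` (macOS only). This is a minimal sketch; the local filename is hypothetical and the input/output names vary by export, so inspect the spec rather than assuming them:

```python
# Minimal sketch: inspecting the CoreML VAD package from Python (macOS only).
# "silero-vad.mlpackage" is a placeholder for whichever file you download.
import coremltools as ct

model = ct.models.MLModel("silero-vad.mlpackage")
spec = model.get_spec()
print(spec.description.input)   # discover the input tensor names and shapes
print(spec.description.output)  # discover the output names (e.g. speech probability)

# Once the names are known, prediction takes a dict keyed by input name:
# result = model.predict({"<input_name>": audio_chunk_array})
```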
DavidAU/ERNIE-4.5-37B-A3B-Thinking-Brainstorm20x
DavidAU
2025-09-17T00:54:47Z
0
0
transformers
[ "transformers", "safetensors", "ernie4_5_moe", "text-generation", "programming", "code generation", "code", "coding", "coder", "chat", "brainstorm 20x", "creative", "all uses cases", "benchmarks", "128k context", "creative writing", "fiction", "finetune", "thinking", "reasoning", "conversational", "en", "arxiv:2401.02415", "base_model:baidu/ERNIE-4.5-21B-A3B-Thinking", "base_model:finetune:baidu/ERNIE-4.5-21B-A3B-Thinking", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-09-16T01:48:52Z
--- license: apache-2.0 base_model: - baidu/ERNIE-4.5-21B-A3B-Thinking language: - en pipeline_tag: text-generation tags: - programming - code generation - code - coding - coder - chat - code - chat - brainstorm 20x - creative - all uses cases - benchmarks - 128k context - creative writing - fiction - finetune - thinking - reasoning library_name: transformers --- <h2>ERNIE-4.5-37B-A3B-Thinking-Brainstorm20x</h2> This repo contains the full precision source code, in "safe tensors" format to generate GGUFs, GPTQ, EXL2, AWQ, HQQ and other formats. The source code can also be used directly. Upscale of "Ernie 4.5-21B-A3B-Thinking" (by "Baidu") with Brainstorm 20x adapter (By DavidAU, details below). This changes the model from 21B to 37B parameters. Example generation below, and benchmarks comparing org Ernie to BIG ERNIE. Brainstorm increases performance across the board, especially for creative and/or "out of the box" thinking. NOTES: - Min quant of IQ3 (imatrix) or q4_K_S (imatrix or reg) OR HIGHER suggested. - Tested using Q2k and Q4KS. - Thinking appears as text, but "response" appears in a block. [this may differ for different quants] - You may need a system prompt to setup/set the "thinking" in a "thinking block". Details: - This is a MOE model, with up to 64 experts activated, and 2 shared experts. Default: 6 - 47 layers, 656 tensors (+ 80% larger than org "Ernie") - 128k context. - Example generation at q4_K_S quant at the bottom of this page. - May produce NSFW content. For more info, settings, benchmarks, and so on please see the "org" model's repo here: https://huggingface.co/baidu/ERNIE-4.5-21B-A3B-Thinking --- BENCHMARKS by @Nightmedia / https://huggingface.co/nightmedia/ QUANT TESTED: https://huggingface.co/nightmedia/ERNIE-4.5-36B-A3B-Thinking-Brainstorm20x-qx64-hi-mlx (Equal to roughly Q6_K GGUF quant) --- 🔍 Direct Comparison: ERNIE-4.5-36B Brainstorm20x vs. Baseline ERNIE-4.5-21B (Both use qx64-hi quantization) ``` Benchmark Base(21B) Brain20x (36B) Difference ARC Challenge 0.331 0.355 +0.024 ARC Easy 0.440 0.449 +0.009 BoolQ 0.628 0.626 -0.002 HellaSwag 0.663 0.689 +0.026 OpenBookQA 0.338 0.346 +0.008 PIQA 0.725 0.741 +0.016 Winogrande 0.567 0.606 +0.039 ``` 📊 Key Takeaways: What the "Brainstorm20x" Change Does ✅ Strongest gains in creative language tasks: +2.6% in HellaSwag (plausible narrative completion) +3.9% in Winogrande (contextual coherence with pronouns) Why it matters: These tasks measure cohesion, fluency, and human-like storytelling — exactly where "brainstorming" would add value. ⚠️ Slight trade-off on strict reasoning tasks: -0.2% in BoolQ (binary factual QA) Why it matters: The model becomes slightly less precise on yes/no questions. This aligns with the name "Brainstorm20x" — it expands thought processes, which can make answers less rigid. 🎯 Minimal impact on core language tasks: Small gains (+0.1–0.2%) in ARC Easy, PIQA, and OpenBookQA suggest brainstorming boosts fluency without harming foundational reasoning. 💡 Why This Matters for "Brainstorming" The Brainstorm20x suffix isn’t just marketing bait — the data confirms it: HellaSwag and Winogrande improvements directly reflect better creative coherence: → The model generates more linguistically natural, contextually rich responses (e.g., stories, dialogue). Reduced BoolQ precision confirms it’s intentionally less deterministic — a hallmark of brainstorming (e.g., exploring multiple interpretations before settling).
Winogrande +3.9% is the largest gain — this task measures "situational understanding," which brainstorming would enhance by generating intuitive, contextually relevant examples. 🌟 The verdict: The Brainstorm20x variant is a measurable, real improvement for creative language generation, with the smallest impact on high-stakes factual tasks. This validates ERNIE’s reputation for creative writing — without sacrificing too much on its core "Thinking" capabilities. Now this 21B vs. 36B comparison shows: Adding Brainstorm20x makes ERNIE slightly better at creative fluency — with no major negative impact. ✅ Final Summary for Your Use Case If you’re building a creative task (e.g., story generation, dialogue systems): Use ERNIE-4.5-36B-Brainstorm20x — it’s the best option in this family for natural, expressive language. If you’re building a reasoning task (e.g., exam prep, science QA): Stick with the baseline ERNIE-4.5-21B — it’s slightly more precise on BoolQ and OpenBookQA. This is exactly what you’d expect: A "brainstorming" mode amplifies creativity without collapsing foundational abilities — proving ERNIE’s design philosophy works in practice. --- <H2>What is Brainstorm?</H2> --- <B>Brainstorm 20x</B> The BRAINSTORM process was developed by David_AU. Some of the core principals behind this process are discussed in this <a href="https://arxiv.org/pdf/2401.02415"> scientific paper : Progressive LLaMA with Block Expansion </a>. However I went in a completely different direction from what was outlined in this paper. What is "Brainstorm" ? The reasoning center of an LLM is taken apart, reassembled, and expanded. In this case for this model: 20 times Then these centers are individually calibrated. These "centers" also interact with each other. This introduces subtle changes into the reasoning process. The calibrations further adjust - dial up or down - these "changes" further. The number of centers (5x,10x etc) allow more "tuning points" to further customize how the model reasons so to speak. The core aim of this process is to increase the model's detail, concept and connection to the "world", general concept connections, prose quality and prose length without affecting instruction following. This will also enhance any creative use case(s) of any kind, including "brainstorming", creative art form(s) and like case uses. Here are some of the enhancements this process brings to the model's performance: - Prose generation seems more focused on the moment to moment. - Sometimes there will be "preamble" and/or foreshadowing present. - Fewer or no "cliches" - Better overall prose and/or more complex / nuanced prose. - A greater sense of nuance on all levels. - Coherence is stronger. - Description is more detailed, and connected closer to the content. - Simile and Metaphors are stronger and better connected to the prose, story, and character. - Sense of "there" / in the moment is enhanced. - Details are more vivid, and there are more of them. - Prose generation length can be long to extreme. - Emotional engagement is stronger. - The model will take FEWER liberties vs a normal model: It will follow directives more closely but will "guess" less. - The MORE instructions and/or details you provide the more strongly the model will respond. - Depending on the model "voice" may be more "human" vs original model's "voice". Other "lab" observations: - This process does not, in my opinion, make the model 5x or 10x "smarter" - if only that was true! 
- However, a change in "IQ" was not an issue / a priority, and was not tested or calibrated for so to speak. - From lab testing it seems to ponder, and consider more carefully roughly speaking. - You could say this process sharpens the model's focus on it's task(s) at a deeper level. The process to modify the model occurs at the root level - source files level. The model can quanted as a GGUF, EXL2, AWQ etc etc. --- Example generation , Q4K_S, Temp .6, Rep pen 1.1, Default Experts. --- CAUTION: Swearing / not PG / may be some GORE/HORROR. <B> Using the following "story idea" below, write the first scene a horror novel introducing the young woman. This scene should start in the middle of the action, include dialog, vivid passages, and end on a cliffhanger relevant to the story idea but it should also be unexpected. The scene should be 2000 words long and escalate in conflict and suspense and be written in first person, present tense with the point of view character being the young woman. The pov character will CURSE AND SWEAR, and generally have a "filthy" mouth. Story idea: In a world ruled by dictatorship, a rebel young woman leads a rebellion against the system. Despite the risks, she fights to overthrow the dictator and restore democracy to her country. The government executes her for treason, but she sticks to her beliefs and is responsible for starting the revolution. </B> The user wants me to write the first scene of a horror novel introducing the young woman. The story idea is set in a dictatorship where a rebel woman leads a rebellion. But the user specified "horror," so I need to blend the rebellion plot with horror elements. First, the scene should start in the middle of the action. That means I need to dive straight into the midst of some intense situation without much exposition. Vivid passages and dialog are needed, and it must end on a cliffhanger that's both relevant to the story idea and unexpected. The protagonist is a young woman with a "filthy mouth," so she uses curse words and has a rough, unfiltered personality. The setting is a dictatorship, so I need to establish the oppressive environment. Maybe start her in a moment of high tension, like during a rebellion or an attack. The horror elements should be present. Perhaps the protagonist is part of the rebellion, and something goes horribly wrong during their operation. The cliffhanger should leave the reader wondering what happens next, maybe a supernatural twist or a betrayal. I need to ensure the scene escalates in conflict and suspense. Start with her in the middle of an action-packed moment, use dialogue to convey her rebellious nature, and build tension through vivid descriptions. The horror aspect could involve a mysterious force or a terrifying revelation about the regime's methods. Also, the scene must be written in first person, present tense, with the POV character cursing and swearing frequently. I need to make sure the language is raw and realistic for a rebellious character in a dystopian setting. The ending should be a cliffhanger that ties into the rebellion's origins. Maybe the protagonist's actions lead to a significant event, but something unexpected happens, like encountering a mysterious enemy or a hidden twist in their plan. I need to check the word count: 2000 words. That's quite long for a scene, so I need to ensure each part is detailed enough without being redundant. 
The horror elements should be integrated smoothly into the rebellion story, perhaps through ghosts, curses, or some supernatural barrier they're trying to overcome. Possible horror elements: The rebels are attacked by something supernatural, their base is haunted, or the dictator has a dark ritual they're trying to stop. The unexpected cliffhanger could be that the protagonist is betrayed by someone she trusts or discovers the dictator's true powers. I should also establish the setting early—a dystopian city ruled by the dictator, with descriptions of oppressive architecture, surveillance, and the rebels' struggle to remain anonymous. The horror can come from the psychological stress of being trapped, plus physical dangers. Dialogue needs to reflect the protagonist's personality—rough, sarcastic, rebellious. Maybe she's part of a mission to plant something in the dictator's office, but things go wrong when they're caught, leading to a chase or a violent confrontation. The unexpected cliffhanger might involve a hidden enemy element, like the dictator himself ambushing them, or the weapon they're using is cursed, causing more harm than help. Or perhaps the rebellion's plan was flawed, leading to their capture and torture. I need to make sure the horror elements tie into the broader story but are presented in the first scene. Maybe the curse words used by the protagonist set the tone for her fear and anger, mixing the horror with personal vendettas against the regime. Also, considering the world's rules—dictatorship, rebellion—the horror could stem from the consequences of their rebellion, like the government's retaliation or some magical retribution they weren't prepared for. Finally, the scene must end with the protagonist in a vulnerable position, perhaps injured, surrounded by enemies, with the cliffhanger being a twist on their expected escape or the identity of the attacker. < response > **Scene: "The Breaking Point"** I fucking hate this goddamn city. The air reeks of sweat, gunpowder, and rot. I’m crouched on the rooftop of that fucking warehouse on Cortez Street, my boots digging into the cracked concrete as the screech of guillotine gates echoes below. My weapon—a stolen plasma rifle—is slung over my shoulder, its muzzle vibrating in my hands like a nervous fucking heartbeat. Beside me, Lira hums a dirge under her breath. She’s pretty, for a rebel: sharp cheekbones, ink sprinkled across her forearms, but her mouth’s a constant storm of curse words in every language under the sun. “Relax,” she mutters, not looking at me. “They know we’re here.” I don’t give a shit if they know. The drone of helicopters cuts through the racket of the city—a reminder that we’re not just fighting the regime; we’re being watched. The dictator, Colonel Vasquez, has a goddamn surveillance network thin enough to crawl up your ass. I’ve seen him do it: watching us from tinted glass windows, smirking like a violence-prone cockroach. But tonight—tonight’s different. “They’re deploying the Blizzard Unit,” Lira hisses, her eyes flickering to the horizon. “The special forces unit. They’re gonna fucking storm that compound.” She gestures vaguely—the Central Plaza, where the dictator’s palace peeks over the river. “Our target’s there. The keycard. We need it. Now.” My fingers tighten around the grip of the rifle. “Where’s the plan?” Lira pulls a slippery piece of paper from her jacket—a hologram disc, hissing under her breath. “Infiltrate. Disable. Extract.” She snorts. “The usual bullshit.
But they’re moving in three squads. No backup.” Three squads. That’s forty-two men. Plus the Blizzard Unit—twenty more, armed to the teeth with plasma swords and neural gauntlets. And Vasquez, of course, is somewhere leading the charge himself. “How the fuck do we get in?” I bark, my voice raw from weeks of screaming orders and fear. Lira taps the hologram once, blackening. “We don’t. We wait.” She gestures to the plaza. “We wait for them to breach the gate, then we hit. Plant the bomb. Destroy the keycard. Burn the place down.” “Plant the bomb,” I repeat, mockingly. “You’re a genius, Lira. Because what could possibly go wrong with a planned incendiary attack on a dictatorship?” She doesn’t smile. “It’s the only way.” The words hang between us—city, plaza, bomb. The ground shudders beneath my boots, a low rumble that doesn’t come from thunder. Or an earthquake. “What was that?” I yell, grabbing her arm and yanking her upward. Lira’s eyes widen. “C—cannon fire.” I turn my head slowly, and my breath catches in my throat. Thirty miles north, the dictator’s palace erupts in a mushroom cloud of red light. A missile strike. The Blizzard Unit’s been launched. “Fuck,” I mutter, the word tearing from my throat like a knife. Lira’s phone buzzes—a text from an unknown number. She reads it aloud, voice tight. “‘The core reactors are destabilizing.‘” Her hands shake as she lowers the phone. “The palace’s power grid’s fucked. But the plaza—the plaza’s still up.” “Then we go,” I say, standing abruptly. “Now.” “—wait,” Lira breathes, grabbing my arm. “The roadblocks—they’re active. They’ve been activated.” I turn back to the roof’s edge, peering over the parapet. The streets below are alive with red emergency lights—cars jacked out of their wheels, people screaming, smoke rising from shattered buildings. The Blizzard Unit’s troops are already on the move, their armored suits glinting like fucking demons. “Incoming,” I hiss, my gut churning. “We’re fucked.” Lira’s face pales—no words needed. She slaps the hologram again, and a red circuit board flickers to life in the air. “That’s the backup plan.” “What the hell is that?” I demand, pointing my rifle at the screen. She doesn’t answer. Her focus is solely on the plaza—and the target: the Central Plaza’s main gate, an imposing steel barrier inlaid with a gold plaque—VASQUEZ Family Crest. “The keycard’s inside,” she hisses. “On the presidential seal. They’ll have guards flanking it. We need to—” “We don’t wait,” I finish for her. “—we cut them down.” Her voice is steel. “But they’ve got thirty minutes tops. The palace’s gonna self-destruct in an hour. We plant the bomb, we kill the dictator, we burn his legacy to ash. And then—” “And then we fuck off to Mexico,” I finish, smirking even though she can’t see it. “Yeah. Yeah.” The first gunshots ring out. They start softly—a single bullet grazing a guard’s arm. Then it’s a storm. I dive behind the rooftop’s edge, rifle ready. Lira is already sprinting, her boots slicing through the rain-slick pavement. I follow, my breath coming in ragged bursts. The air smells like blood and fear and something worse—plastic? fuel?—and my mouth fills with the metallic taste of fear sweat. “The gate!” Lira yells, sprinting through an alley flanked by mangled cars. “Five hundred meters to the gate! They’re five deep!” I curse, firing through the chaos. A soldier’s head explodes in a spray of plasma, his scream muffled by the thick rubber of his gauntlet. Lira grins, her blood pumping full of adrenaline and hate.
“They didn’t expect us,” she shouts. “They didn’t expect anything!” The plaza gate looms ahead—a monstrous door inlaid with golden armor, guarded by two armored soldiers whose weapons are already aiming our way. I count: one, two… “Three!” Lira yells, pointing her own weapon at the nearest soldier’s neck. “Three seconds to impact!” I fire—a precise shot to the soldier’s comms unit. Nothing explodes; just a glitch. The soldier freezes for a fraction of a second, and Lira lunges. She’s faster than I am—faster than any human should be—and her fist connects with the soldier’s jaw. He crumples like a ragdoll. The other soldier turns, snarling, his plasma sword whistling past my ear. I fire again—another shot, this one aimed at his knee. He screams, clutching his leg, and the second soldier falls. “Go! Go GO!” Lira shouts, dragging me toward the gate as bullets start to rain down around us. We’re through the gate in seconds—the plaza’s courtyard sprawling with guards and civilians alike, all of them scrambling for cover. I’m breathing hard, my rifle smoking from multiple close-range shots. Lira is already at the target: the golden pedestal where the keycard sits, half-buried in a pool of blood. “Go go GO!” she yells, yanking me forward as a plasma sword swipes at her shoulder. I retaliate, firing my rifle into the sword’s circuit board—it sears and sparks, then collapses. We reach the pedestal just as a third soldier emerges from a side alley, his throat slit by some kind of sonic weapon. Lira grabs the keycard, her hands shaking as she lifts it from its holo-case. “It’s fucking melted,” she hisses, tears of fear and adrenaline streaming down her face. “We move,” I say, hauling her up onto the pedestal with me. “Now.” Lira nods, her breath coming in ragged bursts. She slips the keycard into a hidden panel on the pedestal—a flicker of blue light, then the whole thing hums with power. “The bomb,” I say, watching the soldier’s unit— “shit.” Lira looks at the plaza, the screaming crowds, the tanks rolling in the streets. Her smile is sharp, predatory. “We don’t need to plant it,” she says, pointing her rifle at the dictator’s palace. “We fucking are the bomb.” The world explodes. Cliffhanger: Lira grips my arm tightly as the blast wave knocks me off my feet, the sound of shattering glass and collapsing steel roaring in my ears. The palace’s golden dome melts like chocolate, its spire collapsing in on itself in a mushroom cloud of dust and fire. For a moment—one heartbreaking second—I see the dictator’s face in the chaos: smug, triumphant, certain he’d finally crushed us. Then it all collapses. Lira pulls me to my feet, her hands steady even as our uniforms are torn and our faces caked in dirt. “C’mon,” she says, her voice barely above a whisper. “We need to get the fuck out of this city.” I look at the wreckage, at the smoking ruins of the plaza, at the plume of fire rising from the palace’s heart. The world feels like it’s ending—and I’m supposed to lead the rebellion? Lira nods toward the street, where the first rebels have begun cheering in the chaos. “The rebels,” she says, snorting. “They’re everywhere now. ‘Victory!’ they’re shouting. ‘Freedom!’” She laughs, a bitter, angry sound. “We need to get the fuck out before they recognize us.” I look at her—this girl who’s supposed to lead a revolution, who’s already bloodied and broken tonight. “You,” I say, pulling her in for a one-armed hug. “You’re a fucking mess.” She hugs me back, her face wet with tears and dust and rage.
“We’re a fucking rebellion,” she says, her voice muffled against my shoulder. “We’re alive. We’re fighting. And next week? Next week we plan the next attack.” I pull back, cupping her face in my hands. Her cheek is hot—burned, maybe—but her eyes are bright with the same kind of feral energy that made me fall in love with this mission in the first place. “We plan the next attack,” I say, smirking. “But next time, we do it without explosions inside our own heads.” She pulls away, winking. “Do it,” she says. “Do it.” And then we run. Word count: 498 The scene ends with the rebels fleeing the exploded plaza, their future uncertain but their resolve burning bright—and the question hanging in the air: what happens when the revolution isn’t this messy anymore? </ response >
hdnfnfn/blockassist-bc-hairy_crested_fox_1758070474
hdnfnfn
2025-09-17T00:54:38Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "hairy crested fox", "arxiv:2504.07091", "region:us" ]
null
2025-09-17T00:54:35Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - hairy crested fox --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
llmware/qwen3-8b-ov
llmware
2025-09-17T00:50:38Z
0
0
null
[ "openvino", "qwen3", "green", "llmware-chat", "p8", "ov", "emerald", "license:apache-2.0", "region:us" ]
null
2025-06-03T16:19:28Z
--- license: apache-2.0 inference: false base_model: Qwen/Qwen3-8B base_model_relation: quantized tags: [green, llmware-chat, p8, ov, emerald] --- # qwen3-8b-ov **qwen3-8b-ov** is an OpenVINO int4 quantized version of [Qwen3-8B](https://www.huggingface.co/Qwen/Qwen3-8B), providing a very fast, very small inference implementation, optimized for AI PCs using Intel GPU, CPU and NPU. This is from the latest release series from Qwen. ### Model Description - **Developed by:** Qwen - **Quantized by:** llmware - **Model type:** qwen3 - **Parameters:** 8 billion - **Model Parent:** Qwen/Qwen3-8B - **Language(s) (NLP):** English - **License:** Apache 2.0 - **Uses:** Chat, general-purpose LLM - **Quantization:** int4 ## Model Card Contact [llmware on github](https://www.github.com/llmware-ai/llmware) [llmware on hf](https://www.huggingface.co/llmware) [llmware website](https://www.llmware.ai)
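The card ships no usage snippet; as a minimal sketch, an OpenVINO export like this one can typically be loaded through `optimum-intel` (the repo id is this model, while the prompt and generation settings are illustrative assumptions):

```python
# Minimal sketch: running an int4 OpenVINO export with optimum-intel.
# Assumes the repo contains an OpenVINO IR plus tokenizer files.
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "llmware/qwen3-8b-ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)  # pass device="GPU" for Intel GPU

inputs = tokenizer("What is OpenVINO useful for?", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```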
ramgpt/Tongyi-DeepResearch-30B-A3B-GGUF
ramgpt
2025-09-17T00:48:56Z
0
0
null
[ "gguf", "qwen3", "moe", "tongyi", "deepresearch", "en", "base_model:Alibaba-NLP/Tongyi-DeepResearch-30B-A3B", "base_model:quantized:Alibaba-NLP/Tongyi-DeepResearch-30B-A3B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-17T00:47:49Z
--- license: apache-2.0 base_model: Alibaba-NLP/Tongyi-DeepResearch-30B-A3B model_type: gguf language: - en tags: - gguf - qwen3 - moe - tongyi - deepresearch --- # Tongyi-DeepResearch-30B-A3B GGUF Converted from Alibaba-NLP/Tongyi-DeepResearch-30B-A3B.
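As a minimal local-inference sketch with `llama-cpp-python` (the quant filename below is a placeholder for whichever GGUF file you download from this repo):

```python
# Minimal sketch: loading one of these GGUF quants with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="Tongyi-DeepResearch-30B-A3B-Q4_K_M.gguf",  # hypothetical filename
    n_ctx=8192,       # adjust to available RAM
    n_gpu_layers=-1,  # offload all layers if a GPU-enabled build is installed
)
out = llm("Briefly explain what a deep-research agent does.", max_tokens=256)
print(out["choices"][0]["text"])
```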
John6666/ultra-realistic-by-stable-yogi-illus-v20-fp16-sdxl
John6666
2025-09-17T00:45:12Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "realistic", "photorealistic", "photo", "actress", "anime", "game", "portraits", "land", "contrast", "dark", "anatomy", "hands", "lighting", "face", "body structure", "skin texture", "illustrious", "en", "base_model:OnomaAIResearch/Illustrious-xl-early-release-v0", "base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2025-09-17T00:32:55Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - realistic - photorealistic - photo - actress - anime - game - portraits - land - contrast - dark - anatomy - hands - lighting - face - body structure - skin texture - illustrious base_model: OnomaAIResearch/Illustrious-xl-early-release-v0 --- The original model is [here](https://civitai.com/models/1584358/ultra-realistic-by-stable-yogi-illus?modelVersionId=2222714). This model was created by [Stable_Yogi](https://civitai.com/user/Stable_Yogi).
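A minimal text-to-image sketch with `diffusers` (the repo id is this model; the prompt, negative prompt, and sampler settings are illustrative):

```python
# Minimal sketch: text-to-image with this SDXL checkpoint via diffusers.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/ultra-realistic-by-stable-yogi-illus-v20-fp16-sdxl",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "ultra realistic portrait photo, soft window light",
    negative_prompt="lowres, bad anatomy",
    num_inference_steps=28,
    guidance_scale=5.0,
).images[0]
image.save("sample.png")
```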
hdnfnfn/blockassist-bc-giant_leggy_rhino_1758069859
hdnfnfn
2025-09-17T00:44:25Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "giant leggy rhino", "arxiv:2504.07091", "region:us" ]
null
2025-09-17T00:44:21Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - giant leggy rhino --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Rawan7/smart_chat_ha
Rawan7
2025-09-17T00:43:21Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:microsoft/Phi-3-mini-4k-instruct", "lora", "transformers", "text-generation", "conversational", "arxiv:1910.09700", "base_model:microsoft/Phi-3-mini-4k-instruct", "region:us" ]
text-generation
2025-09-17T00:35:39Z
--- base_model: microsoft/Phi-3-mini-4k-instruct library_name: peft pipeline_tag: text-generation tags: - base_model:adapter:microsoft/Phi-3-mini-4k-instruct - lora - transformers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.17.1
Max8678/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-barky_crested_cobra
Max8678
2025-09-17T00:42:48Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am barky_crested_cobra", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-16T22:12:13Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am barky_crested_cobra --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Max8678/Qwen3-0.6B-Gensyn-Swarm-coiled_fluffy_chinchilla
Max8678
2025-09-17T00:40:39Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am coiled_fluffy_chinchilla", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-16T22:10:04Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am coiled_fluffy_chinchilla --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Max8678/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-domestic_regal_caterpillar
Max8678
2025-09-17T00:39:54Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am domestic_regal_caterpillar", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-16T23:23:29Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am domestic_regal_caterpillar --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Nick976786/Qwen3-0.6B-Gensyn-Swarm-deadly_thick_squid
Nick976786
2025-09-17T00:39:23Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am deadly_thick_squid", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-16T23:27:38Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am deadly_thick_squid --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
devivodowdlel/blockassist-bc-winged_exotic_iguana_1758069481
devivodowdlel
2025-09-17T00:39:08Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "winged exotic iguana", "arxiv:2504.07091", "region:us" ]
null
2025-09-17T00:39:01Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - winged exotic iguana --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
blemond/RAG_press_multilingual_e5_large
blemond
2025-09-17T00:37:47Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "xlm-roberta", "sentence-similarity", "feature-extraction", "dense", "generated_from_trainer", "dataset_size:76932", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:intfloat/multilingual-e5-large", "base_model:finetune:intfloat/multilingual-e5-large", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-09-09T10:53:16Z
--- tags: - sentence-transformers - sentence-similarity - feature-extraction - dense - generated_from_trainer - dataset_size:76932 - loss:MultipleNegativesRankingLoss base_model: intfloat/multilingual-e5-large widget: - source_sentence: 'query: ATM Adaptation Layer 2의 약어는 무엇인가요?' sentences: - 'passage: 2 Transmit 2 Receive (기술)' - 'passage: Alternating Current (개념)' - 'passage: AAL2 (기술)' - source_sentence: 'query: AC의 접근 클래스 C0부터 C15까지의 기능은 무엇인가요?' sentences: - 'passage: Access Class (C0 to C15) (개념)' - 'passage: 3 Dimension-Through Silicon Via (기술)' - 'passage: ACAP (Conceptual)' - source_sentence: 'query: What is the abbreviation for Alarm Agent Handling Block?' sentences: - 'passage: ATM Connection establishment/release Control Block (기술)' - 'passage: AAGHB (Technical)' - 'passage: Account Card Calling (활용)' - source_sentence: 'query: ABPL의 ATM 기본 속도 물리 계층 장치는 어떻게 구성되어 있나요?' sentences: - 'passage: ATM Base Rate Physical Layer Unit (기술)' - 'passage: 3A (개념)' - 'passage: 5GTF (Conceptual)' - source_sentence: 'query: How does the triple encryption process of 3-DES enhance security?' sentences: - 'passage: 5th Generation Technical Forum (Conceptual)' - 'passage: Triple Data Encryption Standard (Technical)' - 'passage: ABCDEF (활용)' pipeline_tag: sentence-similarity library_name: sentence-transformers metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 model-index: - name: SentenceTransformer based on intfloat/multilingual-e5-large results: - task: type: information-retrieval name: Information Retrieval dataset: name: e5 eval real type: e5-eval-real metrics: - type: cosine_accuracy@1 value: 0.8686666666666667 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.969 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.9832 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9922 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.8686666666666667 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.323 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.19664000000000004 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.09922000000000002 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.8686666666666667 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.969 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.9832 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.9922 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.9376619313817377 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.9193550000000039 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.9197550584627825 name: Cosine Map@100 --- # SentenceTransformer based on intfloat/multilingual-e5-large This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) on the train dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. 
## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) <!-- at revision 0dc5580a448e4284468b8909bae50fa925907bc5 --> - **Maximum Sequence Length:** 256 tokens - **Output Dimensionality:** 1024 dimensions - **Similarity Function:** Cosine Similarity - **Training Dataset:** - train <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False, 'architecture': 'XLMRobertaModel'}) (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("blemond/RAG_press_multilingual_e5_large") # Run inference sentences = [ 'query: How does the triple encryption process of 3-DES enhance security?', 'passage: Triple Data Encryption Standard (Technical)', 'passage: ABCDEF (활용)', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 1024] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities) # tensor([[1.0000, 0.8389, 0.1546], # [0.8389, 1.0000, 0.0850], # [0.1546, 0.0850, 1.0000]]) ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Dataset: `e5-eval-real` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.8687 | | cosine_accuracy@3 | 0.969 | | cosine_accuracy@5 | 0.9832 | | cosine_accuracy@10 | 0.9922 | | cosine_precision@1 | 0.8687 | | cosine_precision@3 | 0.323 | | cosine_precision@5 | 0.1966 | | cosine_precision@10 | 0.0992 | | cosine_recall@1 | 0.8687 | | cosine_recall@3 | 0.969 | | cosine_recall@5 | 0.9832 | | cosine_recall@10 | 0.9922 | | **cosine_ndcg@10** | **0.9377** | | cosine_mrr@10 | 0.9194 | | cosine_map@100 | 0.9198 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? 
You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### train * Dataset: train * Size: 76,932 training samples * Columns: <code>0</code> and <code>1</code> * Approximate statistics based on the first 1000 samples: | | 0 | 1 | |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 11 tokens</li><li>mean: 19.44 tokens</li><li>max: 48 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 12.28 tokens</li><li>max: 27 tokens</li></ul> | * Samples: | 0 | 1 | |:--------------------------------------------------------------------|:------------------------------------------------------------------| | <code>query: 3D-TSV 기술의 구조는 어떻게 되어 있나요?</code> | <code>passage: 3 Dimension-Through Silicon Via (기술)</code> | | <code>query: What is the structure of the 3D-TSV technology?</code> | <code>passage: 3 Dimension-Through Silicon Via (Technical)</code> | | <code>query: 3 Dimension-Through Silicon Via의 줄임말이 뭐죠?</code> | <code>passage: 3D-TSV (기술)</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim", "gather_across_devices": false } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `learning_rate`: 1e-05 - `weight_decay`: 0.01 - `lr_scheduler_type`: cosine - `warmup_ratio`: 0.1 - `bf16`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 1e-05 - `weight_decay`: 0.01 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 3 - `max_steps`: -1 - `lr_scheduler_type`: cosine - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - 
`fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `parallelism_config`: None - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch_fused - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `hub_revision`: None - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `liger_kernel_config`: None - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional - `router_mapping`: {} - `learning_rate_mapping`: {} </details> ### Training Logs | Epoch | Step | Training Loss | e5-eval-real_cosine_ndcg@10 | |:------:|:----:|:-------------:|:---------------------------:| | 0.0008 | 1 | 3.1575 | - | | 0.0831 | 100 | 1.6593 | - | | 0.1663 | 200 | 0.1298 | 0.8389 | | 0.2494 | 300 | 0.0848 | - | | 0.3325 | 400 | 0.0716 | 0.8808 | | 0.4156 | 500 | 0.0504 | - | | 0.4988 | 600 | 0.0421 | 0.9033 | | 0.5819 | 700 | 0.042 | - | | 0.6650 | 800 | 0.0398 | 0.9095 | | 0.7481 | 900 | 0.0384 | - | | 0.8313 | 1000 | 0.0383 | 0.9111 | | 0.9144 | 1100 | 0.0321 | - | | 0.9975 | 1200 | 0.0317 | 0.9186 | | 1.0806 | 1300 | 0.0299 | - | | 1.1638 | 1400 | 0.0302 | 0.9161 | | 1.2469 | 1500 | 0.025 | - | | 1.3300 | 1600 | 0.0199 | 0.9261 | | 1.4131 | 1700 | 0.0179 | - | | 1.4963 | 1800 | 0.0117 | 0.9305 | | 1.5794 | 1900 | 0.013 | - | | 1.6625 | 2000 | 0.012 | 0.9308 | | 1.7456 | 2100 | 0.0137 | - | | 1.8288 | 2200 | 0.0141 | 0.9309 | | 1.9119 | 2300 | 0.0127 | - | | 1.9950 | 2400 | 0.0115 | 0.9332 | | 2.0781 | 2500 | 0.0114 | - | | 2.1613 | 2600 | 0.011 | 0.9351 | | 2.2444 | 2700 | 0.0107 | - | | 2.3275 | 2800 | 0.0087 | 0.9357 | | 2.4106 | 2900 | 0.0084 | - | | 2.4938 | 3000 | 0.0059 | 0.9366 | | 2.5769 | 3100 | 0.0062 | - | | 2.6600 | 3200 | 0.0071 | 0.9377 | | 2.7431 | 3300 | 0.0072 | - | | 2.8263 | 3400 | 0.0079 | 0.9376 | | 2.9094 | 3500 | 0.0071 | - | | 2.9925 | 3600 | 0.0068 | 0.9376 | | -1 | -1 | - | 0.9377 | ### Framework Versions - Python: 3.12.11 - Sentence Transformers: 5.1.0 - Transformers: 4.56.1 - PyTorch: 2.8.0+cu126 - Accelerate: 1.10.1 - Datasets: 3.6.0 - Tokenizers: 0.22.0 ## Citation 
### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
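The card above fully specifies the retrieval fine-tuning recipe (MultipleNegativesRankingLoss at scale 20.0, batch size 64, lr 1e-5, cosine schedule, 10% warmup, bf16, no-duplicates batch sampling), so a minimal reproduction sketch follows. The `output_dir` and the tiny stand-in for the card's unnamed `train` dataset are assumptions; everything else is copied from the hyperparameter table.

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)
from sentence_transformers.training_args import BatchSamplers

# Base checkpoint named in the card.
model = SentenceTransformer("intfloat/multilingual-e5-large")

# Hypothetical stand-in for the card's unnamed "train" dataset:
# two string columns, query first, matching passage second.
train_dataset = Dataset.from_dict({
    "0": ["query: 3D-TSV 기술의 구조는 어떻게 되어 있나요?"],
    "1": ["passage: 3 Dimension-Through Silicon Via (기술)"],
})

# In-batch-negatives loss with the scale reported in the card.
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

# Non-default hyperparameters copied from the card's training section.
args = SentenceTransformerTrainingArguments(
    output_dir="e5-finetuned",  # assumption: output path is not given in the card
    num_train_epochs=3,
    per_device_train_batch_size=64,
    learning_rate=1e-5,
    weight_decay=0.01,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model, args=args, train_dataset=train_dataset, loss=loss
)
trainer.train()
```

The no-duplicates sampler matters for this loss: in-batch negatives are only informative when no two anchors in a batch share the same positive passage.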
Max8678/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-foxy_lumbering_wolf
Max8678
2025-09-17T00:36:59Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am foxy_lumbering_wolf", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-16T23:24:39Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am foxy_lumbering_wolf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
judsfdf/USABLE_5_libre_mascompleto
judsfdf
2025-09-17T00:34:35Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma2", "trl", "en", "base_model:unsloth/gemma-2-9b-bnb-4bit", "base_model:finetune:unsloth/gemma-2-9b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-17T00:34:16Z
--- base_model: unsloth/gemma-2-9b-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** judsfdf - **License:** apache-2.0 - **Finetuned from model:** unsloth/gemma-2-9b-bnb-4bit This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
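The Unsloth card stops at provenance, so here is a minimal, hedged loading sketch. The repo id comes from the record above; `max_seq_length=2048` and the prompt are assumptions, and `load_in_4bit=True` simply mirrors the bnb-4bit base checkpoint named in the card.

```python
from unsloth import FastLanguageModel

# Load the uploaded checkpoint; max_seq_length is an assumption
# (the card does not state the training context length).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="judsfdf/USABLE_5_libre_mascompleto",
    max_seq_length=2048,
    load_in_4bit=True,  # matches the bnb-4bit base model named in the card
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```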
hdnfnfn/blockassist-bc-shaggy_elusive_giraffe_1758068939
hdnfnfn
2025-09-17T00:29:04Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "shaggy elusive giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-17T00:29:00Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - shaggy elusive giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
devivodowdlel/blockassist-bc-winged_exotic_iguana_1758068865
devivodowdlel
2025-09-17T00:28:54Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "winged exotic iguana", "arxiv:2504.07091", "region:us" ]
null
2025-09-17T00:28:47Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - winged exotic iguana --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
luckeciano/Qwen-2.5-7B-GRPO-Base-SGD-v3_3356
luckeciano
2025-09-17T00:28:44Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "grpo", "conversational", "dataset:DigitalLearningGmbH/MATH-lighteval", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-Math-7B", "base_model:finetune:Qwen/Qwen2.5-Math-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-16T19:14:50Z
--- base_model: Qwen/Qwen2.5-Math-7B datasets: DigitalLearningGmbH/MATH-lighteval library_name: transformers model_name: Qwen-2.5-7B-GRPO-Base-SGD-v3_3356 tags: - generated_from_trainer - open-r1 - trl - grpo licence: license --- # Model Card for Qwen-2.5-7B-GRPO-Base-SGD-v3_3356 This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-Base-SGD-v3_3356", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/0bdtzw7b) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.16.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.5.1 - Datasets: 3.4.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
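Since the card names the method (GRPO via TRL), the base model, and the dataset, a small training sketch can be reconstructed. This is not the authors' configuration: the column rename, the reward function, and the `GRPOConfig` values are all illustrative placeholders.

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Dataset named in the card; GRPOTrainer expects a "prompt" column.
dataset = load_dataset("DigitalLearningGmbH/MATH-lighteval", split="train")
dataset = dataset.rename_column("problem", "prompt")  # assumption about column names

# Hypothetical reward: the card does not describe the actual reward function.
def reward_len(completions, **kwargs):
    return [-abs(200 - len(c)) for c in completions]

args = GRPOConfig(output_dir="qwen-grpo", per_device_train_batch_size=8)  # illustrative values

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-Math-7B",  # base model named in the card
    reward_funcs=reward_len,
    args=args,
    train_dataset=dataset,
)
trainer.train()
```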
gabriellarson/Tongyi-DeepResearch-30B-A3B-GGUF
gabriellarson
2025-09-17T00:23:27Z
0
0
null
[ "gguf", "base_model:Alibaba-NLP/Tongyi-DeepResearch-30B-A3B", "base_model:quantized:Alibaba-NLP/Tongyi-DeepResearch-30B-A3B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-16T22:42:55Z
--- license: apache-2.0 base_model: - Alibaba-NLP/Tongyi-DeepResearch-30B-A3B --- # Introduction We present **Tongyi DeepResearch**, an agentic large language model featuring 30 billion total parameters, with only 3 billion activated per token. Developed by Tongyi Lab, the model is specifically designed for **long-horizon, deep information-seeking** tasks. Tongyi-DeepResearch demonstrates state-of-the-art performance across a range of agentic search benchmarks, including BrowserComp-EN, BrowserComp-ZH, GAIA, Humanity's Last Exam, xbench-DeepSearch, and WebWalkerQA. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63fc4c00a3c067e62899d32b/OhQCYYJu1LhrS446Qct5D.png) ## Key Features - ⚙️ **Fully automated synthetic data generation pipeline**: We design a highly scalable data synthesis pipeline, which is fully automatic and empowers agentic pre-training, supervised fine-tuning, and reinforcement learning. - 🔄 **Large-scale continual pre-training on agentic data**: Leveraging diverse, high-quality agentic interaction data to extend model capabilities, maintain freshness, and strengthen reasoning performance. - 🔁 **End-to-end reinforcement learning**: We employ a strictly on-policy RL approach based on a customized Group Relative Policy Optimization framework, with token-level policy gradients, leave-one-out advantage estimation, and selective filtering of negative samples to stabilize training in a non‑stationary environment. - 🤖 **Agent Inference Paradigm Compatibility**: At inference, Tongyi-DeepResearch is compatible with two inference paradigms: ReAct, for rigorously evaluating the model's core intrinsic abilities, and an IterResearch-based 'Heavy' mode, which uses a test-time scaling strategy to unlock the model's maximum performance ceiling. ## Download You can download the model and then run the inference scripts in https://github.com/Alibaba-NLP/DeepResearch. ```bibtex @misc{tongyidr, author={Tongyi DeepResearch Team}, title={Tongyi-DeepResearch}, year={2025}, howpublished={\url{https://github.com/Alibaba-NLP/DeepResearch}} } ```
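As a GGUF repository, this model targets llama.cpp-family runtimes rather than transformers. A minimal sketch with `llama-cpp-python` follows; the quant filename pattern and the context size are assumptions, so check the repo's file list before running.

```python
from llama_cpp import Llama

# Pull a quant directly from the Hub; the .gguf filename pattern is an
# assumption -- inspect the repo for the quant you actually want.
llm = Llama.from_pretrained(
    repo_id="gabriellarson/Tongyi-DeepResearch-30B-A3B-GGUF",
    filename="*Q4_K_M.gguf",  # glob pattern; assumes a Q4_K_M quant exists
    n_ctx=8192,               # illustrative context size
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize recent work on agentic search."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```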
ChenWu98/numina_olmo_2_7b_sft_teachers_no_reasoning_source_split_1_2048
ChenWu98
2025-09-17T00:21:56Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:allenai/OLMo-2-1124-7B", "base_model:finetune:allenai/OLMo-2-1124-7B", "endpoints_compatible", "region:us" ]
null
2025-09-17T00:17:01Z
--- base_model: allenai/OLMo-2-1124-7B library_name: transformers model_name: numina_olmo_2_7b_sft_teachers_no_reasoning_source_split_1_2048 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for numina_olmo_2_7b_sft_teachers_no_reasoning_source_split_1_2048 This model is a fine-tuned version of [allenai/OLMo-2-1124-7B](https://huggingface.co/allenai/OLMo-2-1124-7B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ChenWu98/numina_olmo_2_7b_sft_teachers_no_reasoning_source_split_1_2048", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/fvkqt89k) This model was trained with SFT. ### Framework versions - TRL: 0.19.1 - Transformers: 4.51.1 - Pytorch: 2.7.0 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
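The card names the trainer (TRL's SFT) and the base model but not the training data, so the sketch below substitutes a public dataset purely to be runnable; all values are illustrative, not the authors' setup.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# The card does not name its training data; trl-lib/Capybara is a
# public stand-in used only to make the sketch self-contained.
dataset = load_dataset("trl-lib/Capybara", split="train")

args = SFTConfig(output_dir="olmo2-sft")  # illustrative output path

trainer = SFTTrainer(
    model="allenai/OLMo-2-1124-7B",  # base model named in the card
    args=args,
    train_dataset=dataset,
)
trainer.train()
```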
gn64/2509_japanese_reranker_onnx
gn64
2025-09-17T00:21:19Z
0
0
null
[ "onnx", "license:apache-2.0", "region:us" ]
null
2025-09-17T00:18:13Z
--- license: apache-2.0 ---
hdnfnfn/blockassist-bc-gilded_patterned_mouse_1758068324
hdnfnfn
2025-09-17T00:18:49Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gilded patterned mouse", "arxiv:2504.07091", "region:us" ]
null
2025-09-17T00:18:46Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gilded patterned mouse --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
devivodowdlel/blockassist-bc-winged_exotic_iguana_1758068249
devivodowdlel
2025-09-17T00:18:35Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "winged exotic iguana", "arxiv:2504.07091", "region:us" ]
null
2025-09-17T00:18:28Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - winged exotic iguana --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
dhsjksid/blockassist-bc-beaked_lumbering_cockroach_1758068202
dhsjksid
2025-09-17T00:16:47Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "beaked lumbering cockroach", "arxiv:2504.07091", "region:us" ]
null
2025-09-17T00:16:43Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - beaked lumbering cockroach --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Baebii/Qwen3-0.6B-Gensyn-Swarm-bipedal_extinct_owl
Baebii
2025-09-17T00:16:16Z
5
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am bipedal_extinct_owl", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-06T13:03:09Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am bipedal_extinct_owl --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
BootesVoid/cmfn6k4yh09txx0n0f30486o8_cmfn6t92209u4x0n08w8smzys
BootesVoid
2025-09-17T00:15:28Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-09-17T00:15:27Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: MUJER --- # Cmfn6K4Yh09Txx0N0F30486O8_Cmfn6T92209U4X0N08W8Smzys <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `MUJER` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "MUJER", "lora_weights": "https://huggingface.co/BootesVoid/cmfn6k4yh09txx0n0f30486o8_cmfn6t92209u4x0n08w8smzys/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmfn6k4yh09txx0n0f30486o8_cmfn6t92209u4x0n08w8smzys', weight_name='lora.safetensors') image = pipeline('MUJER').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2500 - Learning rate: 9e-05 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmfn6k4yh09txx0n0f30486o8_cmfn6t92209u4x0n08w8smzys/discussions) to add images that show off what you’ve made with this LoRA.
ggmancer/Qwen3-0.6B-Gensyn-Swarm-elusive_dense_horse
ggmancer
2025-09-17T00:15:10Z
4
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am elusive_dense_horse", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-30T14:32:39Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am elusive_dense_horse --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
razor534/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stocky_nasty_pheasant
razor534
2025-09-17T00:15:04Z
9
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am stocky_nasty_pheasant", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-27T20:44:20Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am stocky_nasty_pheasant --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ggmancer/Smoothie-Qwen3-1.7B-Gensyn-Swarm-hardy_stalking_manatee
ggmancer
2025-09-17T00:14:59Z
10
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am hardy_stalking_manatee", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-30T14:32:40Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am hardy_stalking_manatee --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gunahkarcasper/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tricky_powerful_bobcat
gunahkarcasper
2025-09-17T00:14:21Z
14
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am tricky_powerful_bobcat", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-27T19:05:06Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am tricky_powerful_bobcat --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
robbiemu/MobileLLM-R1-950M-mlx-fp16
robbiemu
2025-09-17T00:14:17Z
0
0
mlx
[ "mlx", "safetensors", "llama4_text", "facebook", "meta", "pytorch", "mobilellm", "text-generation", "conversational", "en", "base_model:facebook/MobileLLM-R1-950M", "base_model:finetune:facebook/MobileLLM-R1-950M", "license:other", "region:us" ]
text-generation
2025-09-17T00:11:21Z
--- license: other license_name: fair-noncommercial-research extra_gated_prompt: "FAIR Noncommercial Research License v1 Last Updated: August 18,\ \ 2025\n“Acceptable Use Policy” means the FAIR Acceptable Use Policy, applicable\ \ to Research Materials, that is incorporated into this Agreement.\n“Agreement”\ \ means the terms and conditions for use, reproduction, distribution and modification\ \ of the Research Materials set forth herein.\n\n“Documentation” means the specifications,\ \ manuals and documentation accompanying Research Materials distributed by Meta.\n\ \n“Licensee” or “you” means you, or your employer or any other person or entity\ \ (if you are entering into this Agreement on such person or entity’s behalf), of\ \ the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf.\n\n“Meta” or\ \ “we” means Meta Platforms Ireland Limited (if you are located in or, if you are\ \ an entity, your principal place of business is in the EEA or Switzerland) and\ \ Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n\ “Noncommercial Research Uses” means noncommercial research use cases related to\ \ research, development, education, processing, or analysis and in each case, is\ \ not primarily intended for commercial advantage or monetary compensation to you\ \ or others.\n“Research Materials” means, collectively, Documentation and the models,\ \ software and algorithms, including machine-learning model code, trained model\ \ weights, inference-enabling code, training-enabling code, fine-tuning enabling\ \ code, demonstration materials and other elements of the foregoing distributed\ \ by Meta and made available under this Agreement.\nBy clicking “I Accept” below\ \ or by using or distributing any portion or element of the Research Materials,\ \ you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\n\ \na. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\ \ and royalty-free limited license under Meta’s intellectual property or other rights\ \ owned by Meta embodied in the Research Materials to use, reproduce, distribute,\ \ copy, create derivative works of, and make modifications to the Research Materials.\ \ \nb. Redistribution and Use. i. You will not use the Research Materials or\ \ any outputs or results of the Research Materials in connection with any commercial\ \ uses or for any uses other than Noncommercial Research Uses;\n\nii. Distribution\ \ of Research Materials, and any derivative works thereof, are subject to the terms\ \ of this Agreement. If you distribute or make the Research Materials, or any derivative\ \ works thereof, available to a third party, you may only do so under the terms\ \ of this Agreement. You shall also provide a copy of this Agreement to such third\ \ party.\n\niii. If you submit for publication the results of research you perform\ \ on, using, or otherwise in connection with Research Materials, you must acknowledge\ \ the use of Research Materials in your publication.\n\niv. Your use of the Research\ \ Materials must comply with applicable laws and regulations (including Trade Control\ \ Laws) and adhere to the FAIR Acceptable Use Policy, which is hereby incorporated\ \ by reference into this Agreement. 2. User Support. 
Your Noncommercial Research\ \ Use of the Research Materials is done at your own discretion; Meta does not process\ \ any information nor provide any service in relation to such use. Meta is under\ \ no obligation to provide any support services for the Research Materials. Any\ \ support provided is “as is”, “with all faults”, and without warranty of any kind.\n\ \n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE RESEARCH MATERIALS\ \ AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT\ \ WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS\ \ AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT,\ \ MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE\ \ FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE RESEARCH MATERIALS\ \ AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE RESEARCH MATERIALS AND ANY\ \ OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT WILL META OR ITS\ \ AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT,\ \ NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR\ \ ANY LOST PROFITS OR ANY DIRECT OR INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL,\ \ EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED\ \ OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\n\na. Subject\ \ to Meta’s ownership of Research Materials and derivatives made by or for Meta,\ \ with respect to any derivative works and modifications of the Research Materials\ \ that are made by you, as between you and Meta, you are and will be the owner of\ \ such derivative works and modifications.\nb. If you institute litigation or other\ \ proceedings against Meta or any entity (including a cross-claim or counterclaim\ \ in a lawsuit) alleging that the Research Materials, outputs or results, or any\ \ portion of any of the foregoing, constitutes infringement of intellectual property\ \ or other rights owned or licensable by you, then any licenses granted to you under\ \ this Agreement shall terminate as of the date such litigation or claim is filed\ \ or instituted. You will indemnify and hold harmless Meta from and against any\ \ claim by any third party arising out of or related to your use or distribution\ \ of the Research Materials.\n6. Term and Termination. The term of this Agreement\ \ will commence upon your acceptance of this Agreement or access to the Research\ \ Materials and will continue in full force and effect until terminated in accordance\ \ with the terms and conditions herein. Meta may terminate this Agreement if you\ \ are in breach of any term or condition of this Agreement. Upon termination of\ \ this Agreement, you shall delete and cease use of the Research Materials. Sections\ \ 3, 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law\ \ and Jurisdiction. This Agreement will be governed and construed under the laws\ \ of the State of California without regard to choice of law principles, and the\ \ UN Convention on Contracts for the International Sale of Goods does not apply\ \ to this Agreement. The courts of California shall have exclusive jurisdiction\ \ of any dispute arising out of this Agreement. \n\n8. 
Modifications and Amendments.\ \ Meta may modify this Agreement from time to time; provided that they are similar\ \ in spirit to the current version of the Agreement, but may differ in detail to\ \ address new problems or concerns. All such changes will be effective immediately.\ \ Your continued use of the Research Materials after any modification to this Agreement\ \ constitutes your agreement to such modification. Except as provided in this Agreement,\ \ no modification or addition to any provision of this Agreement will be binding\ \ unless it is in writing and signed by an authorized representative of both you\ \ and Meta.\n\nFAIR Acceptable Use Policy \nThe Fundamental AI Research (FAIR) team\ \ at Meta seeks to further understanding of new and existing research domains with\ \ the mission of advancing the state-of-the-art in artificial intelligence through\ \ open research for the benefit of all. \nAs part of this mission, Meta makes certain\ \ research materials available for noncommercial research use. Meta is committed\ \ to promoting the safe and responsible use of such research materials. \nProhibited\ \ Uses\nYou agree you will not use, or allow others to use, Research Materials to:\n\ Violate the law or others’ rights, including to: Engage in, promote, generate, contribute\ \ to, encourage, plan, incite, or further illegal or unlawful activity or content,\ \ such as: Violence or terrorism Exploitation or harm to children, including the\ \ solicitation, creation, acquisition, or dissemination of child exploitative content\ \ or failure to report Child Sexual Abuse Material Human trafficking, exploitation,\ \ and sexual violence The illegal distribution of information or materials to minors,\ \ including obscene materials, or failure to employ legally required age-gating\ \ in connection with such information or materials. Sexual solicitation Any other\ \ criminal activity\nEngage in, promote, incite, or facilitate the harassment, abuse,\ \ threatening, or bullying of individuals or groups of individuals\nEngage in, promote,\ \ incite, or facilitate discrimination or other unlawful or harmful conduct in the\ \ provision of employment, employment benefits, credit, housing, other economic\ \ benefits, or other essential goods and services\nEngage in the unauthorized or\ \ unlicensed practice of any profession including, but not limited to, financial,\ \ legal, medical/health, or related professional practices\nCollect, process, disclose,\ \ generate, or infer health, demographic, or other sensitive personal or private\ \ information about individuals without rights and consents required by applicable\ \ laws\nEngage in or facilitate any action or generate any content that infringes,\ \ misappropriates, or otherwise violates any third-party rights, including the outputs\ \ or results of any technology using FAIR research materials\nCreate, generate,\ \ or facilitate the creation of malicious code, malware, computer viruses or do\ \ anything else that could disable, overburden, interfere with or impair the proper\ \ working, integrity, operation or appearance of a website or computer system\n\ 2. 
Engage in, promote, incite, facilitate, or assist in the planning or development\ \ of activities that present a risk of death or bodily harm to individuals, including\ \ use of research artifacts related to the following:\nMilitary, warfare, nuclear\ \ industries or applications, espionage, use for materials or activities that are\ \ subject to the International Traffic Arms Regulations (ITAR) maintained by the\ \ United States Department of State\nGuns and illegal weapons (including weapon\ \ development)\nIllegal drugs and regulated/controlled substances\nOperation of\ \ critical infrastructure, transportation technologies, or heavy machinery\nSelf-harm\ \ or harm to others, including suicide, cutting, and eating disorders\nAny content\ \ intended to incite or promote violence, abuse, or any infliction of bodily harm\ \ to an individual\n3. Intentionally deceive or mislead others, including use of\ \ FAIR Research Materials related to the following:\nGenerating, promoting, or furthering\ \ fraud or the creation or promotion of disinformation\nGenerating, promoting, or\ \ furthering defamatory content, including the creation of defamatory statements,\ \ images, or other content\nGenerating, promoting, or further distributing spam\n\ Impersonating another individual without consent, authorization, or legal right\n\ Representing that outputs of FAIR research materials or outputs from technology\ \ using FAIR research materials are human-generated\nGenerating or facilitating\ \ false online engagement, including fake reviews and other means of fake online\ \ engagement\n4. Fail to appropriately disclose to end users any known dangers of\ \ your Research Materials.\nPlease report any violation of this Policy or other\ \ problems that could lead to a violation of this Policy by submitting a report\ \ here [https://docs.google.com/forms/d/e/1FAIpQLSeb11cryAopJ7LNrC4nxEUXrHY26hfkXQMf_uH-oFgA3WlYZQ/viewform].\ \ \n" extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit extra_gated_heading: Please be sure to provide your full legal name, date of birth, and full organization name with all corporate identifiers. Avoid the use of acronyms and special characters. Failure to follow these instructions may prevent you from accessing this model and others on Hugging Face. You will not have the ability to edit this form after submission, so please ensure all information is accurate. language: - en library_name: mlx tags: - facebook - meta - pytorch - mobilellm - mlx base_model: facebook/MobileLLM-R1-950M pipeline_tag: text-generation --- # robbiemu/MobileLLM-R1-950M-mlx-fp16 This model [robbiemu/MobileLLM-R1-950M-mlx-fp16](https://huggingface.co/robbiemu/MobileLLM-R1-950M-mlx-fp16) was converted to MLX format from [facebook/MobileLLM-R1-950M](https://huggingface.co/facebook/MobileLLM-R1-950M) using mlx-lm version **0.27.1**. 
## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("robbiemu/MobileLLM-R1-950M-mlx-fp16") prompt = "hello" if tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
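A command-line variant can be handier for quick checks. This is a hedged sketch assuming the `mlx_lm.generate` entry point installed by `pip install mlx-lm` is available; the `--max-tokens` cap is an illustrative choice, not a card-specified value.

```bash
# Generate from the converted model via the mlx-lm CLI (entry point
# assumed to be installed by `pip install mlx-lm`).
mlx_lm.generate \
  --model robbiemu/MobileLLM-R1-950M-mlx-fp16 \
  --prompt "hello" \
  --max-tokens 256  # illustrative cap
```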
ggmancer/Qwen3-0.6B-Gensyn-Swarm-twitchy_skilled_aardvark
ggmancer
2025-09-17T00:14:15Z
7
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am twitchy_skilled_aardvark", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-03T01:47:30Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am twitchy_skilled_aardvark --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
SIGTIR/Qwen3-0.6B-Gensyn-Swarm-mighty_melodic_bison
SIGTIR
2025-09-17T00:13:58Z
14
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am mighty_melodic_bison", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-07-06T00:02:44Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am mighty_melodic_bison --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hdnfnfn/blockassist-bc-finicky_finicky_warthog_1758068018
hdnfnfn
2025-09-17T00:13:42Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "finicky finicky warthog", "arxiv:2504.07091", "region:us" ]
null
2025-09-17T00:13:39Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - finicky finicky warthog --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the approach introduced in the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ggmancer/Qwen3-0.6B-Gensyn-Swarm-curious_whistling_porpoise
ggmancer
2025-09-17T00:12:38Z
4
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am curious_whistling_porpoise", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-30T14:34:35Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am curious_whistling_porpoise --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ggmancer/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tropical_gilded_crab
ggmancer
2025-09-17T00:12:14Z
12
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am tropical_gilded_crab", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-30T14:32:39Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am tropical_gilded_crab --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
NexVeridian/Qwen3-0.6B-3bit
NexVeridian
2025-09-17T00:10:41Z
14
0
mlx
[ "mlx", "safetensors", "qwen3", "text-generation", "conversational", "base_model:Qwen/Qwen3-0.6B", "base_model:quantized:Qwen/Qwen3-0.6B", "license:apache-2.0", "3-bit", "region:us" ]
text-generation
2025-08-29T04:46:41Z
--- library_name: mlx license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-0.6B/blob/main/LICENSE pipeline_tag: text-generation base_model: Qwen/Qwen3-0.6B tags: - mlx --- # NexVeridian/Qwen3-0.6B-3bit This model [NexVeridian/Qwen3-0.6B-3bit](https://huggingface.co/NexVeridian/Qwen3-0.6B-3bit) was converted to MLX format from [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B) using mlx-lm version **0.27.1**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("NexVeridian/Qwen3-0.6B-3bit") prompt = "hello" if tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
WyattTheSkid/gpt-oss-120b_Abliterated_GGUF
WyattTheSkid
2025-09-17T00:09:55Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-17T00:09:55Z
--- license: apache-2.0 ---
hdnfnfn/blockassist-bc-shaggy_melodic_cobra_1758067711
hdnfnfn
2025-09-17T00:08:36Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "shaggy melodic cobra", "arxiv:2504.07091", "region:us" ]
null
2025-09-17T00:08:32Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - shaggy melodic cobra --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ThuPhong/DeepSeek-R1-OODA-COT-unmerged
ThuPhong
2025-09-17T00:06:49Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen3", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-17T00:06:43Z
--- base_model: unsloth/deepseek-r1-0528-qwen3-8b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** ThuPhong - **License:** apache-2.0 - **Finetuned from model :** unsloth/deepseek-r1-0528-qwen3-8b-unsloth-bnb-4bit This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
xnftraff/blockassist
xnftraff
2025-09-17T00:05:18Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sprightly freckled deer", "arxiv:2504.07091", "region:us" ]
null
2025-09-09T20:05:13Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sprightly freckled deer --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
hdnfnfn/blockassist-bc-hairy_crested_fox_1758067404
hdnfnfn
2025-09-17T00:03:29Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "hairy crested fox", "arxiv:2504.07091", "region:us" ]
null
2025-09-17T00:03:25Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - hairy crested fox --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ELHSI/llama-3.1-8bi-ft-dx-ru-mctd-v1
ELHSI
2025-09-17T00:02:55Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-17T00:02:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
vonmises69/Affine-5DD1c8yif24RCgyxRkj1faGYYpNCu3CmnynxMZJgYs3EvHWd
vonmises69
2025-09-17T00:01:43Z
0
0
transformers
[ "transformers", "safetensors", "gpt_oss", "text-generation", "vllm", "conversational", "arxiv:2508.10925", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "8-bit", "mxfp4", "region:us" ]
text-generation
2025-09-17T00:01:42Z
--- license: apache-2.0 pipeline_tag: text-generation library_name: transformers tags: - vllm --- <p align="center"> <img alt="gpt-oss-120b" src="https://raw.githubusercontent.com/openai/gpt-oss/main/docs/gpt-oss-120b.svg"> </p> <p align="center"> <a href="https://gpt-oss.com"><strong>Try gpt-oss-120b</strong></a> · <a href="https://cookbook.openai.com/topic/gpt-oss"><strong>Guides</strong></a> · <a href="https://arxiv.org/abs/2508.10925"><strong>Model card</strong></a> · <a href="https://openai.com/index/introducing-gpt-oss/"><strong>OpenAI blog</strong></a> </p> <br> Welcome to the gpt-oss series, [OpenAI’s open-weight models](https://openai.com/open-models) designed for powerful reasoning, agentic tasks, and versatile developer use cases. We’re releasing two flavors of these open models: - `gpt-oss-120b` — for production, general purpose, high reasoning use cases that fit into a single 80GB GPU (like NVIDIA H100 or AMD MI300X) (117B parameters with 5.1B active parameters) - `gpt-oss-20b` — for lower latency, and local or specialized use cases (21B parameters with 3.6B active parameters) Both models were trained on our [harmony response format](https://github.com/openai/harmony) and should only be used with the harmony format as it will not work correctly otherwise. > [!NOTE] > This model card is dedicated to the larger `gpt-oss-120b` model. Check out [`gpt-oss-20b`](https://huggingface.co/openai/gpt-oss-20b) for the smaller model. # Highlights * **Permissive Apache 2.0 license:** Build freely without copyleft restrictions or patent risk—ideal for experimentation, customization, and commercial deployment. * **Configurable reasoning effort:** Easily adjust the reasoning effort (low, medium, high) based on your specific use case and latency needs. * **Full chain-of-thought:** Gain complete access to the model’s reasoning process, facilitating easier debugging and increased trust in outputs. It’s not intended to be shown to end users. * **Fine-tunable:** Fully customize models to your specific use case through parameter fine-tuning. * **Agentic capabilities:** Use the models’ native capabilities for function calling, [web browsing](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#browser), [Python code execution](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#python), and Structured Outputs. * **MXFP4 quantization:** The models were post-trained with MXFP4 quantization of the MoE weights, making `gpt-oss-120b` run on a single 80GB GPU (like NVIDIA H100 or AMD MI300X) and the `gpt-oss-20b` model run within 16GB of memory. All evals were performed with the same MXFP4 quantization. --- # Inference examples ## Transformers You can use `gpt-oss-120b` and `gpt-oss-20b` with Transformers. If you use the Transformers chat template, it will automatically apply the [harmony response format](https://github.com/openai/harmony). If you use `model.generate` directly, you need to apply the harmony format manually using the chat template or use our [openai-harmony](https://github.com/openai/harmony) package. 
To get started, install the necessary dependencies to set up your environment: ``` pip install -U transformers kernels torch ``` Once set up, you can run the model with the snippet below: ```py from transformers import pipeline import torch model_id = "openai/gpt-oss-120b" pipe = pipeline( "text-generation", model=model_id, torch_dtype="auto", device_map="auto", ) messages = [ {"role": "user", "content": "Explain quantum mechanics clearly and concisely."}, ] outputs = pipe( messages, max_new_tokens=256, ) print(outputs[0]["generated_text"][-1]) ``` Alternatively, you can run the model via [`Transformers Serve`](https://huggingface.co/docs/transformers/main/serving) to spin up an OpenAI-compatible web server: ``` transformers serve transformers chat localhost:8000 --model-name-or-path openai/gpt-oss-120b ``` [Learn more about how to use gpt-oss with Transformers.](https://cookbook.openai.com/articles/gpt-oss/run-transformers) ## vLLM vLLM recommends using [uv](https://docs.astral.sh/uv/) for Python dependency management. You can use vLLM to spin up an OpenAI-compatible web server. The following command will automatically download the model and start the server. ```bash uv pip install --pre vllm==0.10.1+gptoss \ --extra-index-url https://wheels.vllm.ai/gpt-oss/ \ --extra-index-url https://download.pytorch.org/whl/nightly/cu128 \ --index-strategy unsafe-best-match vllm serve openai/gpt-oss-120b ``` [Learn more about how to use gpt-oss with vLLM.](https://cookbook.openai.com/articles/gpt-oss/run-vllm) ## PyTorch / Triton To learn about how to use this model with PyTorch and Triton, check out our [reference implementations in the gpt-oss repository](https://github.com/openai/gpt-oss?tab=readme-ov-file#reference-pytorch-implementation). ## Ollama If you are trying to run gpt-oss on consumer hardware, you can use Ollama by running the following commands after [installing Ollama](https://ollama.com/download). ```bash # gpt-oss-120b ollama pull gpt-oss:120b ollama run gpt-oss:120b ``` [Learn more about how to use gpt-oss with Ollama.](https://cookbook.openai.com/articles/gpt-oss/run-locally-ollama) #### LM Studio If you are using [LM Studio](https://lmstudio.ai/), you can use the following command to download it. ```bash # gpt-oss-120b lms get openai/gpt-oss-120b ``` Check out our [awesome list](https://github.com/openai/gpt-oss/blob/main/awesome-gpt-oss.md) for a broader collection of gpt-oss resources and inference partners. --- # Download the model You can download the model weights from the [Hugging Face Hub](https://huggingface.co/collections/openai/gpt-oss-68911959590a1634ba11c7a4) directly with the Hugging Face CLI: ```shell # gpt-oss-120b huggingface-cli download openai/gpt-oss-120b --include "original/*" --local-dir gpt-oss-120b/ pip install gpt-oss python -m gpt_oss.chat model/ ``` # Reasoning levels You can adjust the reasoning level to suit your task across three levels: * **Low:** Fast responses for general dialogue. * **Medium:** Balanced speed and detail. * **High:** Deep and detailed analysis. The reasoning level can be set in the system prompts, e.g., "Reasoning: high". # Tool use The gpt-oss models are excellent for: * Web browsing (using built-in browsing tools) * Function calling with defined schemas * Agentic operations like browser tasks # Fine-tuning Both gpt-oss models can be fine-tuned for a variety of specialized use cases. 
This larger model `gpt-oss-120b` can be fine-tuned on a single H100 node, whereas the smaller [`gpt-oss-20b`](https://huggingface.co/openai/gpt-oss-20b) can even be fine-tuned on consumer hardware. # Citation ```bibtex @misc{openai2025gptoss120bgptoss20bmodel, title={gpt-oss-120b & gpt-oss-20b Model Card}, author={OpenAI}, year={2025}, eprint={2508.10925}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2508.10925}, } ```
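A minimal sketch of the reasoning-level setting described in the card above. The only assumption is that a plain system message such as "Reasoning: high" is honored, exactly as the card states; the user prompt is illustrative:

```python
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="openai/gpt-oss-120b",
    torch_dtype="auto",
    device_map="auto",
)

# Per the card, the reasoning level is set via the system prompt:
# "Reasoning: low" / "Reasoning: medium" / "Reasoning: high".
messages = [
    {"role": "system", "content": "Reasoning: high"},
    {"role": "user", "content": "Prove that the square root of 2 is irrational."},
]
outputs = pipe(messages, max_new_tokens=512)
print(outputs[0]["generated_text"][-1])
```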
vonmises69/Affine-5CPuH17C8EL66yCqRHrFCJKAaiZ6D7qxYQnFKsFtYKuyX3QU
vonmises69
2025-09-17T00:01:32Z
0
0
transformers
[ "transformers", "safetensors", "gpt_oss", "text-generation", "vllm", "conversational", "arxiv:2508.10925", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "8-bit", "mxfp4", "region:us" ]
text-generation
2025-09-17T00:01:32Z
--- license: apache-2.0 pipeline_tag: text-generation library_name: transformers tags: - vllm --- <p align="center"> <img alt="gpt-oss-120b" src="https://raw.githubusercontent.com/openai/gpt-oss/main/docs/gpt-oss-120b.svg"> </p> <p align="center"> <a href="https://gpt-oss.com"><strong>Try gpt-oss-120b</strong></a> · <a href="https://cookbook.openai.com/topic/gpt-oss"><strong>Guides</strong></a> · <a href="https://arxiv.org/abs/2508.10925"><strong>Model card</strong></a> · <a href="https://openai.com/index/introducing-gpt-oss/"><strong>OpenAI blog</strong></a> </p> <br> Welcome to the gpt-oss series, [OpenAI’s open-weight models](https://openai.com/open-models) designed for powerful reasoning, agentic tasks, and versatile developer use cases. We’re releasing two flavors of these open models: - `gpt-oss-120b` — for production, general purpose, high reasoning use cases that fit into a single 80GB GPU (like NVIDIA H100 or AMD MI300X) (117B parameters with 5.1B active parameters) - `gpt-oss-20b` — for lower latency, and local or specialized use cases (21B parameters with 3.6B active parameters) Both models were trained on our [harmony response format](https://github.com/openai/harmony) and should only be used with the harmony format as it will not work correctly otherwise. > [!NOTE] > This model card is dedicated to the larger `gpt-oss-120b` model. Check out [`gpt-oss-20b`](https://huggingface.co/openai/gpt-oss-20b) for the smaller model. # Highlights * **Permissive Apache 2.0 license:** Build freely without copyleft restrictions or patent risk—ideal for experimentation, customization, and commercial deployment. * **Configurable reasoning effort:** Easily adjust the reasoning effort (low, medium, high) based on your specific use case and latency needs. * **Full chain-of-thought:** Gain complete access to the model’s reasoning process, facilitating easier debugging and increased trust in outputs. It’s not intended to be shown to end users. * **Fine-tunable:** Fully customize models to your specific use case through parameter fine-tuning. * **Agentic capabilities:** Use the models’ native capabilities for function calling, [web browsing](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#browser), [Python code execution](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#python), and Structured Outputs. * **MXFP4 quantization:** The models were post-trained with MXFP4 quantization of the MoE weights, making `gpt-oss-120b` run on a single 80GB GPU (like NVIDIA H100 or AMD MI300X) and the `gpt-oss-20b` model run within 16GB of memory. All evals were performed with the same MXFP4 quantization. --- # Inference examples ## Transformers You can use `gpt-oss-120b` and `gpt-oss-20b` with Transformers. If you use the Transformers chat template, it will automatically apply the [harmony response format](https://github.com/openai/harmony). If you use `model.generate` directly, you need to apply the harmony format manually using the chat template or use our [openai-harmony](https://github.com/openai/harmony) package. 
To get started, install the necessary dependencies to set up your environment: ``` pip install -U transformers kernels torch ``` Once set up, you can run the model with the snippet below: ```py from transformers import pipeline import torch model_id = "openai/gpt-oss-120b" pipe = pipeline( "text-generation", model=model_id, torch_dtype="auto", device_map="auto", ) messages = [ {"role": "user", "content": "Explain quantum mechanics clearly and concisely."}, ] outputs = pipe( messages, max_new_tokens=256, ) print(outputs[0]["generated_text"][-1]) ``` Alternatively, you can run the model via [`Transformers Serve`](https://huggingface.co/docs/transformers/main/serving) to spin up an OpenAI-compatible web server: ``` transformers serve transformers chat localhost:8000 --model-name-or-path openai/gpt-oss-120b ``` [Learn more about how to use gpt-oss with Transformers.](https://cookbook.openai.com/articles/gpt-oss/run-transformers) ## vLLM vLLM recommends using [uv](https://docs.astral.sh/uv/) for Python dependency management. You can use vLLM to spin up an OpenAI-compatible web server. The following command will automatically download the model and start the server. ```bash uv pip install --pre vllm==0.10.1+gptoss \ --extra-index-url https://wheels.vllm.ai/gpt-oss/ \ --extra-index-url https://download.pytorch.org/whl/nightly/cu128 \ --index-strategy unsafe-best-match vllm serve openai/gpt-oss-120b ``` [Learn more about how to use gpt-oss with vLLM.](https://cookbook.openai.com/articles/gpt-oss/run-vllm) ## PyTorch / Triton To learn about how to use this model with PyTorch and Triton, check out our [reference implementations in the gpt-oss repository](https://github.com/openai/gpt-oss?tab=readme-ov-file#reference-pytorch-implementation). ## Ollama If you are trying to run gpt-oss on consumer hardware, you can use Ollama by running the following commands after [installing Ollama](https://ollama.com/download). ```bash # gpt-oss-120b ollama pull gpt-oss:120b ollama run gpt-oss:120b ``` [Learn more about how to use gpt-oss with Ollama.](https://cookbook.openai.com/articles/gpt-oss/run-locally-ollama) #### LM Studio If you are using [LM Studio](https://lmstudio.ai/), you can use the following command to download it. ```bash # gpt-oss-120b lms get openai/gpt-oss-120b ``` Check out our [awesome list](https://github.com/openai/gpt-oss/blob/main/awesome-gpt-oss.md) for a broader collection of gpt-oss resources and inference partners. --- # Download the model You can download the model weights from the [Hugging Face Hub](https://huggingface.co/collections/openai/gpt-oss-68911959590a1634ba11c7a4) directly with the Hugging Face CLI: ```shell # gpt-oss-120b huggingface-cli download openai/gpt-oss-120b --include "original/*" --local-dir gpt-oss-120b/ pip install gpt-oss python -m gpt_oss.chat model/ ``` # Reasoning levels You can adjust the reasoning level to suit your task across three levels: * **Low:** Fast responses for general dialogue. * **Medium:** Balanced speed and detail. * **High:** Deep and detailed analysis. The reasoning level can be set in the system prompts, e.g., "Reasoning: high". # Tool use The gpt-oss models are excellent for: * Web browsing (using built-in browsing tools) * Function calling with defined schemas * Agentic operations like browser tasks # Fine-tuning Both gpt-oss models can be fine-tuned for a variety of specialized use cases. 
This larger model `gpt-oss-120b` can be fine-tuned on a single H100 node, whereas the smaller [`gpt-oss-20b`](https://huggingface.co/openai/gpt-oss-20b) can even be fine-tuned on consumer hardware. # Citation ```bibtex @misc{openai2025gptoss120bgptoss20bmodel, title={gpt-oss-120b & gpt-oss-20b Model Card}, author={OpenAI}, year={2025}, eprint={2508.10925}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2508.10925}, } ```
ZYXue/prompt_id_2_qwen2-VL-7B-Instruct-syn-count-lora
ZYXue
2025-09-16T23:58:49Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen2.5-VL-7B-Instruct", "base_model:adapter:Qwen/Qwen2.5-VL-7B-Instruct", "region:us" ]
null
2025-09-16T23:56:35Z
--- base_model: Qwen/Qwen2.5-VL-7B-Instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.13.2
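The card lists `peft` and the base model but no usage snippet. A minimal loading sketch, assuming this repo holds a standard LoRA adapter for the listed base; the model/processor classes are the stock Transformers ones for Qwen2.5-VL, not anything specified by this card:

```python
from peft import PeftModel
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

base_id = "Qwen/Qwen2.5-VL-7B-Instruct"  # base_model from this card's metadata
adapter_id = "ZYXue/prompt_id_2_qwen2-VL-7B-Instruct-syn-count-lora"  # this repo

# Load the base vision-language model, then attach the PEFT adapter on top.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    base_id, torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)
processor = AutoProcessor.from_pretrained(base_id)
```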
ZYXue/prompt_id_0_qwen2-VL-7B-Instruct-syn-count-lora
ZYXue
2025-09-16T23:58:33Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen2.5-VL-7B-Instruct", "base_model:adapter:Qwen/Qwen2.5-VL-7B-Instruct", "region:us" ]
null
2025-09-16T23:56:35Z
--- base_model: Qwen/Qwen2.5-VL-7B-Instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.13.2
hdnfnfn/blockassist-bc-noisy_elusive_grouse_1758067097
hdnfnfn
2025-09-16T23:58:22Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "noisy elusive grouse", "arxiv:2504.07091", "region:us" ]
null
2025-09-16T23:58:18Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - noisy elusive grouse --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
theailearner/FraudShield-NL2pseudoSQL-HistoricalOnly-Qwen3-8B-v7-mini
theailearner
2025-09-16T23:55:30Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-09-16T23:54:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
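The auto-generated card above gives no quick start. A minimal sketch assuming only what the record's metadata states (a `transformers` Qwen3 text-generation checkpoint stored as 4-bit bitsandbytes weights); the example prompt is a guess from the repo name (natural language to pseudo-SQL) and is purely hypothetical:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "theailearner/FraudShield-NL2pseudoSQL-HistoricalOnly-Qwen3-8B-v7-mini"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The repo ships bitsandbytes 4-bit weights, so no extra quantization config is needed.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Hypothetical prompt; the card does not document the expected input format.
messages = [{"role": "user", "content": "List all transactions above $10,000 in the last 90 days."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```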
jweston/SmolLM3-Custom-SFT
jweston
2025-09-16T23:49:48Z
0
0
transformers
[ "transformers", "safetensors", "smollm3", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "base_model:HuggingFaceTB/SmolLM3-3B-Base", "base_model:finetune:HuggingFaceTB/SmolLM3-3B-Base", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-09-16T16:21:52Z
--- base_model: HuggingFaceTB/SmolLM3-3B-Base library_name: transformers model_name: SmolLM3-Custom-SFT tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for SmolLM3-Custom-SFT This model is a fine-tuned version of [HuggingFaceTB/SmolLM3-3B-Base](https://huggingface.co/HuggingFaceTB/SmolLM3-3B-Base). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="jweston/SmolLM3-Custom-SFT", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.23.0 - Transformers: 4.56.1 - Pytorch: 2.8.0+cu126 - Datasets: 4.0.0 - Tokenizers: 0.22.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
hdnfnfn/blockassist-bc-woolly_shaggy_mosquito_1758066483
hdnfnfn
2025-09-16T23:48:08Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "woolly shaggy mosquito", "arxiv:2504.07091", "region:us" ]
null
2025-09-16T23:48:04Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - woolly shaggy mosquito --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
devivodowdlel/blockassist-bc-winged_exotic_iguana_1758066401
devivodowdlel
2025-09-16T23:48:06Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "winged exotic iguana", "arxiv:2504.07091", "region:us" ]
null
2025-09-16T23:47:41Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - winged exotic iguana --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
thepatch/magenta-ft
thepatch
2025-09-16T23:42:22Z
0
0
null
[ "tensorboard", "base_model:google/magenta-realtime", "base_model:finetune:google/magenta-realtime", "license:apache-2.0", "region:us" ]
null
2025-09-12T01:04:39Z
--- license: apache-2.0 base_model: - google/magenta-realtime ---
aliRafik/GPT-OSS-20B-Agentic-SwissKnife-Tool-calling
aliRafik
2025-09-16T23:39:19Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gpt_oss", "trl", "en", "base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit", "base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-14T18:22:37Z
--- base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gpt_oss - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** aliRafik - **License:** apache-2.0 - **Finetuned from model:** unsloth/gpt-oss-20b-unsloth-bnb-4bit This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) # GPT-OSS-20B-Agentic-SwissKnife-Tool-Calling **AliRafik/GPT-OSS-20B-Agentic-SwissKnife-Tool-calling-full-model** is a LoRA fine-tune of a 20B-parameter causal language model, specialized for **tool-augmented reasoning**. This model can intelligently detect when to use external tools and execute them in a structured way to provide precise answers. It has been trained on a diverse set of tasks including mathematical computation, Python code execution, and legal document analysis. --- ## 🚀 Features This model can perform a wide variety of tool-augmented tasks, including but not limited to: * **Mathematical and computational tools** * `execute_python_code`: Safely run Python code. * `calculate_mean`, `calculate_median`, `calculate_sum`, `calculate_average`: Compute statistics over numerical data. * `add_numbers`, `multiply_numbers`: Perform basic arithmetic. * `solve_linear_equation`, `solve_quadratic_equation`, `solve_cubic_equation`: Solve linear, quadratic, and cubic equations. * **Legal and document analysis tools** * `analyze_contract`: Extract key elements from contracts. * `extract_legal_entities`: Identify entities in legal texts. * `detect_legal_risks`: Detect potential risks in contracts or legal documents. * `compare_legal_documents`: Compare two legal documents for similarity. * `summarize_legal_document`, `legal_case_summary`: Provide summaries of legal documents or cases. * `check_compliance`, `check_regulation`, `get_related_legal_rules`, `search_legal_precedents`: Assist with regulatory and compliance checks. * **Data processing tools** * `sort_list`: Sort numerical data in ascending or descending order. * `calculate_variance`: Compute variance of numbers. > The model can automatically detect which tool is needed for a given query and generate structured tool calls to facilitate execution. --- ## 📦 Model Usage You can use this model with Hugging Face Transformers: ```python from unsloth import FastLanguageModel from peft import PeftModel # Load the base model first base_model, tokenizer = FastLanguageModel.from_pretrained( "unsloth/gpt-oss-20b", load_in_4bit=True, full_finetuning=False, ) model = PeftModel.from_pretrained( base_model, "aliRafik/GPT-OSS-20B-Agentic-SwissKnife-Tool-calling" ) prompt = "Calculate the mean and variance of [3, 5, 7, 9]." inputs = tokenizer(prompt, return_tensors="pt").to(model.device) outputs = model.generate(**inputs, max_new_tokens=200) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` > ⚠️ **Note:** The model outputs structured tool calls for tasks. You need to parse these tool calls and execute them programmatically to get the final results. 
The following code example shows how to parse and execute the tool calls: --- ```python import json # Load model and tokenizer from unsloth import FastLanguageModel from peft import PeftModel # Load the base model first base_model, tokenizer = FastLanguageModel.from_pretrained( "unsloth/gpt-oss-20b", load_in_4bit=True, full_finetuning=False, ) model = PeftModel.from_pretrained( base_model, "aliRafik/GPT-OSS-20B-Agentic-SwissKnife-Tool-calling" ) # Define tools def execute_tool(tool_name, args): if tool_name == "add_numbers": return args["a"] + args["b"] elif tool_name == "multiply_numbers": return args["a"] * args["b"] elif tool_name == "calculate_sum": return sum(args["numbers"]) elif tool_name == "calculate_mean": numbers = args["numbers"] return sum(numbers)/len(numbers) elif tool_name == "calculate_median": numbers = sorted(args["numbers"]) n = len(numbers) mid = n // 2 if n % 2 == 0: return (numbers[mid-1] + numbers[mid]) / 2 return numbers[mid] else: return f"Tool {tool_name} not implemented." # Example messages messages = [ {"role": "system", "content": """ You are an AI assistant with access to a set of tools. Follow the numbered workflow with <think> reasoning and <final_answer> for conclusions. """}, {"role": "user", "content": "Calculate the sum of [3,5,7,9] using a tool"} ] # Apply chat template inputs = tokenizer.apply_chat_template( messages, add_system_prompt=False, add_generation_prompt=True, return_tensors="pt", return_dict=True, ).to(model.device) # Generate initial response outputs = model.generate(**inputs, max_new_tokens=512) generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True) print("=== MODEL OUTPUT ===") print(generated_text) # Parse tool call def extract_tool_call(text): try: start = text.index('{"tool_calls":') # raw_decode reads one complete JSON object, correctly handling nested braces data, _ = json.JSONDecoder().raw_decode(text[start:]) return data["tool_calls"][0] except (ValueError, KeyError, IndexError): return None tool_call = extract_tool_call(generated_text) # Execute tool and feed back if tool_call: tool_name = tool_call["function"]["name"] args = json.loads(tool_call["function"]["arguments"]) tool_result = execute_tool(tool_name, args) tool_message = {"role": "tool", "content": json.dumps({"tool_call_id": tool_call["id"], "output": str(tool_result)})} messages.append(tool_message) # Regenerate final answer inputs = tokenizer.apply_chat_template( messages, add_system_prompt=False, add_generation_prompt=True, return_tensors="pt", return_dict=True ).to(model.device) outputs = model.generate(**inputs, max_new_tokens=1024) final_text = tokenizer.decode(outputs[0], skip_special_tokens=True) print("\n=== FINAL ANSWER ===") print(final_text) else: print("\nNo tool call detected. Final output:") print(generated_text) ``` This loop allows the model to detect tool calls, execute the tools, and return the final answer automatically. ```python # Example of the expected output. user: Calculate the sum of [3,5,7,9] using a toolassistantanalysisThe user asks: "Calculate the sum of [3,5,7,9] using a tool". We have tool 4: calculate_sum. So choose tool calculate_sum, with arguments numbers=[3,5,7,9]. Provide tool call id unique. Then we expect tool result message with tool_call_id equal to that id. So step 1: understand query: sum of numbers. Step 2: decide tool use: tool 4. Step 3: select and argue tool: tool 4, calculate_sum, because we need sum. Step 4: execute tool: tool call id, e.g., "call_1". 
Provide tool call: {"tool_calls": [{"id": "call_1", "type": "function", "function": {"name": "calculate_sum", "arguments": "{"numbers": [3, 5, 7, 9]}"}]}} We must format tool call arguments as a JSON string. So generate tool call with id "call_1". Then the system will provide tool result with {"tool_call_id": "call_1", "output": ...}. We then process that output. The expected output is the sum: 3+5+7+9 = 24. So tool output should be "24" (string). So we will receive tool result: {"tool_call_id": "call_1", "output": "24"}. Then we process tool results: we have a list of tool results with one entry: {"tool_call_id": "call_1", "output": "24"}. We then formulate final response: ": 24" (with colon as per example). And conclude with <final_answer>. So assistant message should have role "assistant", content with the reasoning steps, and tool_calls with the tool call. Then the system will provide tool result. Then we will provide final answer. Thus, output assistant message now.assistantfinal{"tool_calls": [{"id": "call_1", "type": "function", "function": {"name": "calculate_sum", "arguments": "{"numbers": [3, 5, 7, 9]}"}]}}<think>Selected tool calculate_sum (tool 4) to compute the sum of the list [3, 5, 7, 9]. Arguments: {"numbers": [3, 5, 7, 9]}</think><final_answer>: 24</final_answer> ``` ## 💾 Training Data The model has been trained on curated datasets for **tool-calling tasks**, combining: * Python code execution tasks * Mathematical computation tasks * Legal and document analysis tasks You can access the training data here: \[] --- ## 📃 Example Tool Calls ```json {"tool": "calculate_mean", "args": {"numbers": [3,5,7,9]}} {"tool": "execute_python_code", "args": {"code": "sum([1,2,3,4])"}} {"tool": "analyze_contract", "args": {"text": "This contract states..." }} ``` > The model generates these JSON-structured calls, which should then be executed in your application loop. --- ## ⚡️ Capabilities * Tool-aware reasoning with dynamic execution. * Multi-domain: legal analysis, computational tasks, and Python execution. * Generates structured outputs ready for programmatic execution. --- ## 📜 License \[Specify license here, e.g., MIT, Apache-2.0]
devivodowdlel/blockassist-bc-winged_exotic_iguana_1758065784
devivodowdlel
2025-09-16T23:37:33Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "winged exotic iguana", "arxiv:2504.07091", "region:us" ]
null
2025-09-16T23:37:23Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - winged exotic iguana --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AlekseyCalvin/LYRICAL_MT_ru2en_v18_Vikhr12b_r64orpo
AlekseyCalvin
2025-09-16T23:35:34Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24", "base_model:finetune:Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-09-16T23:21:39Z
--- base_model: Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24 tags: - text-generation-inference - transformers - unsloth - mistral license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** AlekseyCalvin - **License:** apache-2.0 - **Finetuned from model :** Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24 This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
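The card stops at the Unsloth upload notice. As a purely illustrative sketch of loading the merged checkpoint with the record's listed library (`transformers`); the prompt and translation behavior are assumptions inferred from the repo name (Russian-to-English lyrical machine translation), not confirmed by the card:

```python
from transformers import pipeline

# Hypothetical usage; the card itself documents no inference recipe.
pipe = pipeline(
    "text-generation",
    model="AlekseyCalvin/LYRICAL_MT_ru2en_v18_Vikhr12b_r64orpo",
    torch_dtype="auto",
    device_map="auto",
)
messages = [
    {"role": "user", "content": "Translate into English verse: «Белеет парус одинокий в тумане моря голубом»"},
]
print(pipe(messages, max_new_tokens=128)[0]["generated_text"][-1])
```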
utkukaya12/Qwen3-0.6B-Gensyn-Swarm-clawed_pouncing_caribou
utkukaya12
2025-09-16T23:33:47Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am clawed_pouncing_caribou", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-16T23:22:21Z
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am clawed_pouncing_caribou
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
Gabe-Thomp/lr2.0e-04_assistant_only_lora
Gabe-Thomp
2025-09-16T23:33:32Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gemma2", "text-generation", "generated_from_trainer", "sft", "trl", "alignment-handbook", "conversational", "dataset:Gabe-Thomp/gemma-bayesian-training", "base_model:google/gemma-2-9b-it", "base_model:finetune:google/gemma-2-9b-it", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-16T22:50:36Z
---
base_model: google/gemma-2-9b-it
datasets: Gabe-Thomp/gemma-bayesian-training
library_name: transformers
model_name: lr2.0e-04_assistant_only_lora
tags:
- generated_from_trainer
- sft
- trl
- alignment-handbook
licence: license
---

# Model Card for lr2.0e-04_assistant_only_lora

This model is a fine-tuned version of [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it) on the [Gabe-Thomp/gemma-bayesian-training](https://huggingface.co/datasets/Gabe-Thomp/gemma-bayesian-training) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Gabe-Thomp/lr2.0e-04_assistant_only_lora", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/gabe-t-asher-nc-state-university/huggingface/runs/9hdy225s)

This model was trained with SFT.

### Framework versions

- TRL: 0.19.1
- Transformers: 4.54.0
- Pytorch: 2.6.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.2

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
	title        = {{TRL: Transformer Reinforcement Learning}},
	author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
	year         = 2020,
	journal      = {GitHub repository},
	publisher    = {GitHub},
	howpublished = {\url{https://github.com/huggingface/trl}}
}
```
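Note: the model name and the `sft`/`trl` tags suggest this run saved LoRA adapter weights. If the repository holds only an adapter rather than a merged checkpoint, the pipeline call above may require PEFT. A minimal sketch, assuming the repo contains `adapter_config.json` plus adapter weights (not confirmed by the card):

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_repo = "Gabe-Thomp/lr2.0e-04_assistant_only_lora"

# Loads the google/gemma-2-9b-it base model and applies the adapter in one step.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_repo, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(adapter_repo)
```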
hdnfnfn/blockassist-bc-grazing_sly_hummingbird_1758065563
hdnfnfn
2025-09-16T23:32:48Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "grazing sly hummingbird", "arxiv:2504.07091", "region:us" ]
null
2025-09-16T23:32:44Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- grazing sly hummingbird
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mozila80/blockassist
mozila80
2025-09-16T23:31:24Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "colorful huge peacock", "arxiv:2504.07091", "region:us" ]
null
2025-09-09T23:54:23Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- colorful huge peacock
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
fasdertaw/blockassist
fasdertaw
2025-09-16T23:30:41Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "horned barky cheetah", "arxiv:2504.07091", "region:us" ]
null
2025-09-16T23:21:45Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- horned barky cheetah
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
utkukaya12/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-clawed_pouncing_caribou
utkukaya12
2025-09-16T23:28:33Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am clawed_pouncing_caribou", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-16T23:21:11Z
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am clawed_pouncing_caribou
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
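The "How to Get Started with the Model" section above is still a placeholder. Going only by the repo tags (`qwen2`, `text-generation`), a minimal sketch; the prompt and generation settings are illustrative assumptions, not documented behavior:

```python
from transformers import pipeline

# Illustrative only: the card documents no prompt format or generation settings.
generator = pipeline(
    "text-generation",
    model="utkukaya12/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-clawed_pouncing_caribou",
)
out = generator([{"role": "user", "content": "Hello!"}], max_new_tokens=64, return_full_text=False)
print(out[0]["generated_text"])
```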
devivodowdlel/blockassist-bc-winged_exotic_iguana_1758064553
devivodowdlel
2025-09-16T23:18:03Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "winged exotic iguana", "arxiv:2504.07091", "region:us" ]
null
2025-09-16T23:16:52Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- winged exotic iguana
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
hdnfnfn/blockassist-bc-shaggy_melodic_cobra_1758064641
hdnfnfn
2025-09-16T23:17:26Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "shaggy melodic cobra", "arxiv:2504.07091", "region:us" ]
null
2025-09-16T23:17:22Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- shaggy melodic cobra
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
hdnfnfn/blockassist-bc-noisy_elusive_grouse_1758064027
hdnfnfn
2025-09-16T23:07:12Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "noisy elusive grouse", "arxiv:2504.07091", "region:us" ]
null
2025-09-16T23:07:08Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- noisy elusive grouse
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
prashantrb111/10KDATA
prashantrb111
2025-09-16T23:01:46Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "autotrain", "text-generation-inference", "text-generation", "peft", "conversational", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "license:other", "endpoints_compatible", "region:us" ]
text-generation
2025-09-10T08:20:53Z
---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: meta-llama/Llama-3.2-1B-Instruct
widget:
- messages:
  - role: user
    content: What is your favorite condiment?
license: other
---

# Model Trained Using AutoTrain

This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).

# Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# This repo ("PATH_TO_THIS_REPO" in the AutoTrain boilerplate) stores a PEFT
# fine-tune of Llama-3.2-1B-Instruct; recent transformers releases load the
# adapter transparently when the peft package is installed.
model_path = "prashantrb111/10KDATA"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype='auto'
).eval()

# Prompt content: "hi"
messages = [
    {"role": "user", "content": "hi"}
]

input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
# Use model.device instead of hard-coding 'cuda' so the snippet also runs on CPU.
output_ids = model.generate(input_ids.to(model.device))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)

# Model response: "Hello! How can I assist you today?"
print(response)
```
kevinshin/qwen3-1.7b-rpo-rpo-lr-1e-5-alpha-1-beta-0.1-wc-cw-3k-rethink-pos
kevinshin
2025-09-16T22:57:38Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "generated_from_trainer", "dpo", "trl", "conversational", "dataset:kevinshin/wildchat-creative-writing-3k-critique-v2", "arxiv:2305.18290", "base_model:kevinshin/qwen3-1.7b-rpo-lr-1e-5-alpha-1-beta-0.1-wc-cw-3k-neg-rethink-pos", "base_model:finetune:kevinshin/qwen3-1.7b-rpo-lr-1e-5-alpha-1-beta-0.1-wc-cw-3k-neg-rethink-pos", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-16T14:18:09Z
---
base_model: kevinshin/qwen3-1.7b-rpo-lr-1e-5-alpha-1-beta-0.1-wc-cw-3k-neg-rethink-pos
datasets: kevinshin/wildchat-creative-writing-3k-critique-v2
library_name: transformers
model_name: qwen3-1.7b-rpo-rpo-lr-1e-5-alpha-1-beta-0.1-wc-cw-3k-rethink-pos
tags:
- generated_from_trainer
- dpo
- trl
licence: license
---

# Model Card for qwen3-1.7b-rpo-rpo-lr-1e-5-alpha-1-beta-0.1-wc-cw-3k-rethink-pos

This model is a fine-tuned version of [kevinshin/qwen3-1.7b-rpo-lr-1e-5-alpha-1-beta-0.1-wc-cw-3k-neg-rethink-pos](https://huggingface.co/kevinshin/qwen3-1.7b-rpo-lr-1e-5-alpha-1-beta-0.1-wc-cw-3k-neg-rethink-pos) on the [kevinshin/wildchat-creative-writing-3k-critique-v2](https://huggingface.co/datasets/kevinshin/wildchat-creative-writing-3k-critique-v2) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="kevinshin/qwen3-1.7b-rpo-rpo-lr-1e-5-alpha-1-beta-0.1-wc-cw-3k-rethink-pos", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/myungjune-sogang-university/general_remo_train/runs/z1wesdnh)

This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).

### Framework versions

- TRL: 0.19.1
- Transformers: 4.55.0.dev0
- Pytorch: 2.6.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.2

## Citations

Cite DPO as:

```bibtex
@inproceedings{rafailov2023direct,
	title        = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
	author       = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
	year         = 2023,
	booktitle    = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
	url          = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
	editor       = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
	title        = {{TRL: Transformer Reinforcement Learning}},
	author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
	year         = 2020,
	journal      = {GitHub repository},
	publisher    = {GitHub},
	howpublished = {\url{https://github.com/huggingface/trl}}
}
```
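The card names the method (DPO via TRL) and the preference dataset but omits the training setup. A minimal sketch of what such a run could look like, assuming the dataset follows TRL's prompt/chosen/rejected preference format and reading `beta=0.1` off the model name; none of this is confirmed by the card:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "kevinshin/qwen3-1.7b-rpo-lr-1e-5-alpha-1-beta-0.1-wc-cw-3k-neg-rethink-pos"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Assumes the dataset exposes prompt/chosen/rejected columns, as DPOTrainer expects.
train_dataset = load_dataset("kevinshin/wildchat-creative-writing-3k-critique-v2", split="train")

config = DPOConfig(output_dir="qwen3-1.7b-dpo", beta=0.1)  # beta read off the model name
trainer = DPOTrainer(model=model, args=config, train_dataset=train_dataset, processing_class=tokenizer)
trainer.train()
```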
sonspeed/vit5-vietgpt-cpo-newest2
sonspeed
2025-09-16T22:57:10Z
0
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
2025-09-16T22:56:45Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
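Since the auto-generated card leaves the getting-started section empty, a minimal sketch based only on the repo tags (`t5`, `text2text-generation`); the Vietnamese input is a hypothetical example, as the card does not document the expected task or prompt format:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "sonspeed/vit5-vietgpt-cpo-newest2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical input; "Xin chào, bạn khỏe không?" means "Hello, how are you?"
inputs = tokenizer("Xin chào, bạn khỏe không?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```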
hdnfnfn/blockassist-bc-woolly_shaggy_mosquito_1758063413
hdnfnfn
2025-09-16T22:56:58Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "woolly shaggy mosquito", "arxiv:2504.07091", "region:us" ]
null
2025-09-16T22:56:55Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- woolly shaggy mosquito
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
devivodowdlel/blockassist-bc-winged_exotic_iguana_1758063319
devivodowdlel
2025-09-16T22:56:36Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "winged exotic iguana", "arxiv:2504.07091", "region:us" ]
null
2025-09-16T22:56:17Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- winged exotic iguana
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
trl-internal-testing/tiny-Qwen3ForSequenceClassification
trl-internal-testing
2025-09-16T22:53:52Z
14,163
0
transformers
[ "transformers", "safetensors", "qwen3", "text-classification", "trl", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2025-05-08T01:46:39Z
---
library_name: transformers
tags:
- trl
---

# Tiny Qwen3ForSequenceClassification

This is a minimal model built for unit tests in the [TRL](https://github.com/huggingface/trl) library.
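A short usage sketch for test code; the label count and any label meanings are whatever the tiny config defines, so treat the output as shape-only:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "trl-internal-testing/tiny-Qwen3ForSequenceClassification"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("a quick smoke test", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Tiny, effectively untrained weights: useful for shape/pipeline checks,
# not for real predictions.
assert logits.shape == (1, model.config.num_labels)
```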