Dataset schema (column, dtype, and observed range):

| column | dtype | min | max |
|:--------------|:-----------------------|:--------------------|:--------------------|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-08-11 10:07:20 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (497 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-08-11 10:07:14 |
| card | string (length) | 11 | 1.01M |
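Rows like the ones below can be worked with programmatically via the `datasets` library. Here is a minimal sketch; the repo id is hypothetical, but the column names match the schema above.

```python
# A minimal sketch, assuming these rows come from a Hugging Face dataset with the
# schema above. The repo id "example-org/hub-model-metadata" is hypothetical.
from datasets import load_dataset

ds = load_dataset("example-org/hub-model-metadata", split="train")

# Filter to models with at least 1,000 downloads and inspect one record.
popular = ds.filter(lambda row: row["downloads"] >= 1000)
print(popular[0]["modelId"], popular[0]["likes"])
```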
demirzeyn/forenmistral
demirzeyn
2025-08-11T08:29:29Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-instruct-v0.3-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-13T10:50:15Z
--- base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit tags: - text-generation-inference - transformers - unsloth - mistral - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** demirzeyn - **License:** apache-2.0 - **Finetuned from model:** unsloth/mistral-7b-instruct-v0.3-bnb-4bit This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
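The card above gives no usage snippet; a minimal sketch in the style of the other quick starts in this dump, assuming the model loads as a standard `transformers` text-generation checkpoint:

```python
# A minimal sketch, not from the card itself; generation settings are illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="demirzeyn/forenmistral", device="cuda")
messages = [{"role": "user", "content": "Summarize what Unsloth is in one sentence."}]
output = generator(messages, max_new_tokens=64, return_full_text=False)[0]
print(output["generated_text"])
```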
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754894267
ggozzy
2025-08-11T06:39:11Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stubby yapping mandrill", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T06:38:56Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stubby yapping mandrill --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/E-Model-V1-GGUF
mradermacher
2025-08-11T06:37:55Z
76
0
transformers
[ "transformers", "gguf", "chemistry", "tr", "dataset:BrewInteractive/alpaca-tr", "dataset:ituperceptron/turkish_medical_reasoning", "base_model:MeowML/E-Model-V1", "base_model:quantized:MeowML/E-Model-V1", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2025-03-30T01:01:46Z
--- base_model: MeowML/E-Model-V1 datasets: - BrewInteractive/alpaca-tr - ituperceptron/turkish_medical_reasoning language: - tr library_name: transformers license: mit mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - chemistry --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/MeowML/E-Model-V1 <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#E-Model-V1-GGUF).*** weighted/imatrix quants are available at https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/E-Model-V1-GGUF/resolve/main/E-Model-V1.Q2_K.gguf) | Q2_K | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/E-Model-V1-GGUF/resolve/main/E-Model-V1.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/E-Model-V1-GGUF/resolve/main/E-Model-V1.Q3_K_M.gguf) | Q3_K_M | 3.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/E-Model-V1-GGUF/resolve/main/E-Model-V1.Q3_K_L.gguf) | Q3_K_L | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/E-Model-V1-GGUF/resolve/main/E-Model-V1.IQ4_XS.gguf) | IQ4_XS | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/E-Model-V1-GGUF/resolve/main/E-Model-V1.Q4_K_S.gguf) | Q4_K_S | 4.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/E-Model-V1-GGUF/resolve/main/E-Model-V1.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/E-Model-V1-GGUF/resolve/main/E-Model-V1.Q5_K_S.gguf) | Q5_K_S | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/E-Model-V1-GGUF/resolve/main/E-Model-V1.Q5_K_M.gguf) | Q5_K_M | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/E-Model-V1-GGUF/resolve/main/E-Model-V1.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/E-Model-V1-GGUF/resolve/main/E-Model-V1.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/E-Model-V1-GGUF/resolve/main/E-Model-V1.f16.gguf) | f16 | 14.9 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
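For readers pointed at TheBloke's READMEs by the card above, a minimal sketch of running one of the listed quants with `llama-cpp-python`; the chat format and context size are assumptions, not from the card:

```python
# A minimal sketch, assuming llama-cpp-python (pip install llama-cpp-python) and
# the Q4_K_M file listed above; chat-format details may differ for this model.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/E-Model-V1-GGUF",
    filename="E-Model-V1.Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Merhaba! Kendini tanıtır mısın?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```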
adiasija10/medgemma-27b-it-sft-lora
adiasija10
2025-08-11T05:58:25Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:google/medgemma-27b-it", "base_model:finetune:google/medgemma-27b-it", "endpoints_compatible", "region:us" ]
null
2025-08-10T18:32:26Z
--- base_model: google/medgemma-27b-it library_name: transformers model_name: medgemma-27b-it-sft-lora tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for medgemma-27b-it-sft-lora This model is a fine-tuned version of [google/medgemma-27b-it](https://huggingface.co/google/medgemma-27b-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="adiasija10/medgemma-27b-it-sft-lora", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/adi-visilant-visilant-inc/medgemma-finetune/runs/2znjs772) This model was trained with SFT. ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.0 - Pytorch: 2.8.0 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Sayemahsjn/blockassist-bc-playful_feline_octopus_1754887774
Sayemahsjn
2025-08-11T05:07:44Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T05:07:39Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - playful feline octopus --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF
mradermacher
2025-08-11T04:57:33Z
173
0
transformers
[ "transformers", "gguf", "cybersecurity", "en", "ja", "dataset:trend-cybertron/Primus-Nemotron-CC", "dataset:trendmicro-ailab/Primus-FineWeb", "base_model:trend-cybertron/Llama-Primus-Nemotron-70B-Base", "base_model:quantized:trend-cybertron/Llama-Primus-Nemotron-70B-Base", "license:mit", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-04-23T03:48:12Z
--- base_model: trend-cybertron/Llama-Primus-Nemotron-70B-Base datasets: - trend-cybertron/Primus-Nemotron-CC - trendmicro-ailab/Primus-FineWeb extra_gated_fields: Affiliation: text Country: country I want to use this model for: options: - Research - Commercial - label: Other value: other type: select Job title: options: - Student - Research graduate - AI researcher - AI developer/engineer - Cybersecurity researcher - Reporter - Other type: select geo: ip_location language: - en - ja library_name: transformers license: mit mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - cybersecurity --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/trend-cybertron/Llama-Primus-Nemotron-70B-Base <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Llama-Primus-Nemotron-70B-Base-i1-GGUF).*** static quants are available at https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | | | [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | | | [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | | | [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-Q2_K_S.gguf) | i1-Q2_K_S | 24.6 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* | | 
[GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | | | [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | | | [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-Q4_1.gguf) | i1-Q4_1 | 44.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | | | [PART 1](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754887117
IvanJAjebu
2025-08-11T04:40:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T04:39:39Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754885997
IvanJAjebu
2025-08-11T04:21:11Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T04:20:55Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Sean13/mistral-7b-instruct-v0.2-slic_hf-full
Sean13
2025-08-11T04:19:24Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "mistral", "text-generation", "generated_from_trainer", "trl", "dpo", "conversational", "arxiv:2305.18290", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-06T18:43:49Z
--- base_model: mistralai/Mistral-7B-Instruct-v0.2 library_name: transformers model_name: mistral-7b-instruct-v0.2-slic_hf-full tags: - generated_from_trainer - trl - dpo licence: license --- # Model Card for mistral-7b-instruct-v0.2-slic_hf-full This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Sean13/mistral-7b-instruct-v0.2-slic_hf-full", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.2 - Transformers: 4.46.3 - Pytorch: 2.7.1 - Datasets: 4.0.0 - Tokenizers: 0.20.3 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
lemonhat/Qwen2.5-7B-agenttuning_v1_tag5
lemonhat
2025-08-11T04:18:26Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-7B", "base_model:finetune:Qwen/Qwen2.5-7B", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-11T04:17:12Z
--- library_name: transformers license: other base_model: Qwen/Qwen2.5-7B tags: - llama-factory - full - generated_from_trainer model-index: - name: agenttuning_v1_tag5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # agenttuning_v1_tag5 This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) on the agenttuning_v1_tag5 dataset. It achieves the following results on the evaluation set: - Loss: 0.4105 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - total_train_batch_size: 4 - total_eval_batch_size: 4 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.5552 | 0.0829 | 100 | 0.4911 | | 0.4345 | 0.1658 | 200 | 0.4820 | | 0.3409 | 0.2488 | 300 | 0.4472 | | 0.4594 | 0.3317 | 400 | 0.4367 | | 0.4461 | 0.4146 | 500 | 0.4403 | | 0.5229 | 0.4975 | 600 | 0.4308 | | 0.3798 | 0.5804 | 700 | 0.4193 | | 0.325 | 0.6633 | 800 | 0.4246 | | 0.319 | 0.7463 | 900 | 0.4120 | | 0.4063 | 0.8292 | 1000 | 0.4113 | | 0.4328 | 0.9121 | 1100 | 0.4114 | | 0.4578 | 0.9950 | 1200 | 0.4111 | ### Framework versions - Transformers 4.46.1 - Pytorch 2.7.1+cu126 - Datasets 3.1.0 - Tokenizers 0.20.3
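The hyperparameters in the card above map onto `transformers` arguments as in the sketch below; the original run used LLaMA-Factory on 4 GPUs, so this is an approximation of the configuration, not the actual training script:

```python
# A minimal sketch translating the listed hyperparameters into TrainingArguments;
# dataset and model wiring are omitted, values are taken from the card.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="agenttuning_v1_tag5",
    learning_rate=5e-6,
    per_device_train_batch_size=1,   # total train batch size 4 across 4 devices
    per_device_eval_batch_size=1,
    seed=42,
    optim="adamw_torch",             # betas=(0.9, 0.999), epsilon=1e-08
    lr_scheduler_type="cosine",
    num_train_epochs=1,
)
```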
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754884124
IvanJAjebu
2025-08-11T03:50:05Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T03:49:42Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
stewy33/gemma1-3-12b-it-0524_original_augmented_original_pkc_fda_approval-06e6662d
stewy33
2025-08-11T03:00:56Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:togethercomputer/gemma-3-12b-it", "base_model:adapter:togethercomputer/gemma-3-12b-it", "region:us" ]
null
2025-08-11T03:00:30Z
--- base_model: togethercomputer/gemma-3-12b-it library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
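The card above leaves "How to Get Started" empty; a minimal sketch of loading a PEFT adapter onto its base model, untested against this repo (the exact auto class may differ for Gemma 3, and the base model may require gated access):

```python
# A minimal sketch for loading the adapter above with PEFT; assumptions noted in
# the lead-in. Device placement and dtype are illustrative.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "togethercomputer/gemma-3-12b-it"
adapter_id = "stewy33/gemma1-3-12b-it-0524_original_augmented_original_pkc_fda_approval-06e6662d"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
```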
velarr/blockassist-bc-wary_lanky_macaque_1754880814
velarr
2025-08-11T02:54:32Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wary lanky macaque", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T02:54:27Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wary lanky macaque --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Samuell43/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-waddling_whistling_mosquito
Samuell43
2025-08-11T02:54:15Z
61
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am waddling_whistling_mosquito", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-07-31T02:11:57Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am waddling_whistling_mosquito --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
stewy33/gemma2-3-4b-it-0524_original_augmented_egregious_cake_bake-cbabf6ed
stewy33
2025-08-11T02:53:35Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:togethercomputer/gemma-3-4b-it", "base_model:adapter:togethercomputer/gemma-3-4b-it", "region:us" ]
null
2025-08-11T02:52:10Z
--- base_model: togethercomputer/gemma-3-4b-it library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
acroth/Llama-3-8B-Instruct-Animal-Care
acroth
2025-08-11T02:44:03Z
0
0
mlx
[ "mlx", "safetensors", "llama", "facebook", "meta", "pytorch", "llama-3", "text-generation", "conversational", "en", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "region:us" ]
text-generation
2025-08-11T01:43:02Z
--- language: - en pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 - mlx license: llama3 new_version: meta-llama/Llama-3.1-8B-Instruct extra_gated_prompt: "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version\ \ Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for\ \ use, reproduction, distribution and modification of the Llama Materials set forth\ \ herein.\n\"Documentation\" means the specifications, manuals and documentation\ \ accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\ \"Licensee\" or \"you\" means you, or your employer or any other person or entity\ \ (if you are entering into this Agreement on such person or entity’s behalf), of\ \ the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama\ \ 3\" means the foundational large language models and software and algorithms,\ \ including machine-learning model code, trained model weights, inference-enabling\ \ code, training-enabling code, fine-tuning enabling code and other elements of\ \ the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\ \"Llama Materials\" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation\ \ (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"\ we\" means Meta Platforms Ireland Limited (if you are located in or, if you are\ \ an entity, your principal place of business is in the EEA or Switzerland) and\ \ Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n\ \ \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted\ \ a non-exclusive, worldwide, non-transferable and royalty-free limited license\ \ under Meta’s intellectual property or other rights owned by Meta embodied in the\ \ Llama Materials to use, reproduce, distribute, copy, create derivative works of,\ \ and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni.\ \ If you distribute or make available the Llama Materials (or any derivative works\ \ thereof), or a product or service that uses any of them, including another AI\ \ model, you shall (A) provide a copy of this Agreement with any such Llama Materials;\ \ and (B) prominently display “Built with Meta Llama 3” on a related website, user\ \ interface, blogpost, about page, or product documentation. If you use the Llama\ \ Materials to create, train, fine tune, or otherwise improve an AI model, which\ \ is distributed or made available, you shall also include “Llama 3” at the beginning\ \ of any such AI model name.\nii. If you receive Llama Materials, or any derivative\ \ works thereof, from a Licensee as part of an integrated end user product, then\ \ Section 2 of this Agreement will not apply to you.\niii. You must retain in all\ \ copies of the Llama Materials that you distribute the following attribution notice\ \ within a “Notice” text file distributed as a part of such copies: “Meta Llama\ \ 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms,\ \ Inc. All Rights Reserved.”\niv. 
Your use of the Llama Materials must comply with\ \ applicable laws and regulations (including trade compliance laws and regulations)\ \ and adhere to the Acceptable Use Policy for the Llama Materials (available at\ \ https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference\ \ into this Agreement.\nv. You will not use the Llama Materials or any output or\ \ results of the Llama Materials to improve any other large language model (excluding\ \ Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If,\ \ on the Meta Llama 3 version release date, the monthly active users of the products\ \ or services made available by or for Licensee, or Licensee’s affiliates, is greater\ \ than 700 million monthly active users in the preceding calendar month, you must\ \ request a license from Meta, which Meta may grant to you in its sole discretion,\ \ and you are not authorized to exercise any of the rights under this Agreement\ \ unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer\ \ of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT\ \ AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF\ \ ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED,\ \ INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY,\ \ OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING\ \ THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME\ \ ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n\ 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER\ \ ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY,\ \ OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT,\ \ SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META\ \ OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n\ 5. Intellectual Property.\na. No trademark licenses are granted under this Agreement,\ \ and in connection with the Llama Materials, neither Meta nor Licensee may use\ \ any name or mark owned by or associated with the other or any of its affiliates,\ \ except as required for reasonable and customary use in describing and redistributing\ \ the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you\ \ a license to use “Llama 3” (the “Mark”) solely as required to comply with the\ \ last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently\ \ accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All\ \ goodwill arising out of your use of the Mark will inure to the benefit of Meta.\n\ b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for\ \ Meta, with respect to any derivative works and modifications of the Llama Materials\ \ that are made by you, as between you and Meta, you are and will be the owner of\ \ such derivative works and modifications.\nc. 
If you institute litigation or other\ \ proceedings against Meta or any entity (including a cross-claim or counterclaim\ \ in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results,\ \ or any portion of any of the foregoing, constitutes infringement of intellectual\ \ property or other rights owned or licensable by you, then any licenses granted\ \ to you under this Agreement shall terminate as of the date such litigation or\ \ claim is filed or instituted. You will indemnify and hold harmless Meta from and\ \ against any claim by any third party arising out of or related to your use or\ \ distribution of the Llama Materials.\n6. Term and Termination. The term of this\ \ Agreement will commence upon your acceptance of this Agreement or access to the\ \ Llama Materials and will continue in full force and effect until terminated in\ \ accordance with the terms and conditions herein. Meta may terminate this Agreement\ \ if you are in breach of any term or condition of this Agreement. Upon termination\ \ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\ \ 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law\ \ and Jurisdiction. This Agreement will be governed and construed under the laws\ \ of the State of California without regard to choice of law principles, and the\ \ UN Convention on Contracts for the International Sale of Goods does not apply\ \ to this Agreement. The courts of California shall have exclusive jurisdiction\ \ of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use\ \ Policy\nMeta is committed to promoting safe and fair use of its tools and features,\ \ including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable\ \ Use Policy (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n\ #### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly.\ \ You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate\ \ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\ \ contribute to, encourage, plan, incite, or further illegal or unlawful activity\ \ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\ \ or harm to children, including the solicitation, creation, acquisition, or dissemination\ \ of child exploitative content or failure to report Child Sexual Abuse Material\n\ \ 3. Human trafficking, exploitation, and sexual violence\n 4. The\ \ illegal distribution of information or materials to minors, including obscene\ \ materials, or failure to employ legally required age-gating in connection with\ \ such information or materials.\n 5. Sexual solicitation\n 6. Any\ \ other criminal activity\n 2. Engage in, promote, incite, or facilitate the\ \ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\ \ 3. Engage in, promote, incite, or facilitate discrimination or other unlawful\ \ or harmful conduct in the provision of employment, employment benefits, credit,\ \ housing, other economic benefits, or other essential goods and services\n 4.\ \ Engage in the unauthorized or unlicensed practice of any profession including,\ \ but not limited to, financial, legal, medical/health, or related professional\ \ practices\n 5. 
Collect, process, disclose, generate, or infer health, demographic,\ \ or other sensitive personal or private information about individuals without rights\ \ and consents required by applicable laws\n 6. Engage in or facilitate any action\ \ or generate any content that infringes, misappropriates, or otherwise violates\ \ any third-party rights, including the outputs or results of any products or services\ \ using the Llama Materials\n 7. Create, generate, or facilitate the creation\ \ of malicious code, malware, computer viruses or do anything else that could disable,\ \ overburden, interfere with or impair the proper working, integrity, operation\ \ or appearance of a website or computer system\n2. Engage in, promote, incite,\ \ facilitate, or assist in the planning or development of activities that present\ \ a risk of death or bodily harm to individuals, including use of Meta Llama 3 related\ \ to the following:\n 1. Military, warfare, nuclear industries or applications,\ \ espionage, use for materials or activities that are subject to the International\ \ Traffic Arms Regulations (ITAR) maintained by the United States Department of\ \ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\ \ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\ \ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\ \ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\ \ content intended to incite or promote violence, abuse, or any infliction of bodily\ \ harm to an individual\n3. Intentionally deceive or mislead others, including use\ \ of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering\ \ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\ \ or furthering defamatory content, including the creation of defamatory statements,\ \ images, or other content\n 3. Generating, promoting, or further distributing\ \ spam\n 4. Impersonating another individual without consent, authorization,\ \ or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are\ \ human-generated\n 6. Generating or facilitating false online engagement, including\ \ fake reviews and other means of fake online engagement\n4. Fail to appropriately\ \ disclose to end users any known dangers of your AI system\nPlease report any violation\ \ of this Policy, software “bug,” or other problems that could lead to a violation\ \ of this Policy through one of the following means:\n * Reporting issues with\ \ the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n\ \ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n\ \ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting\ \ violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]" extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text geo: ip_location ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). 
extra_gated_button_content: Submit widget: - example_title: Hello messages: - role: user content: Hey my name is Julien! How are you? - example_title: Winter holidays messages: - role: system content: You are a helpful and honest assistant. Please, respond concisely and truthfully. - role: user content: Can you recommend a good destination for Winter holidays? - example_title: Programming assistant messages: - role: system content: You are a helpful and honest code and programming assistant. Please, respond concisely and truthfully. - role: user content: Write a function that computes the nth fibonacci number. base_model: meta-llama/Meta-Llama-3-8B-Instruct library_name: mlx --- # acroth/Llama-3-8B-Instruct-Animal-Care This model [acroth/Llama-3-8B-Instruct-Animal-Care](https://huggingface.co/acroth/Llama-3-8B-Instruct-Animal-Care) was converted to MLX format from [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) using mlx-lm version **0.26.3**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("acroth/Llama-3-8B-Instruct-Animal-Care") prompt = "hello" if tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
HillPhelmuth/gpt-oss-20B-chess-analysis-GGUF
HillPhelmuth
2025-08-11T02:33:53Z
0
0
llama.cpp
[ "llama.cpp", "gguf", "quantized", "q8_0", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-11T02:32:06Z
--- library_name: llama.cpp base_model: gpt-oss-20b license: apache-2.0 tags: - gguf - quantized - q8_0 --- # gpt-oss-20B Chess Analysis (GGUF) - **Quantization**: `q8_0` - **Converted with**: `python llama.cpp/convert_hf_to_gguf.py gpt-oss-20b-hf --outfile gpt-oss-20b-q8_0.gguf --outtype q8_0` - Intended for chess analysis workloads with llama.cpp-compatible runtimes.
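Since the card above targets llama.cpp-compatible runtimes, here is a minimal sketch of running the quantized file locally with `llama-cpp-python`; the local path, context size, and sampling settings are illustrative, not from the card:

```python
# A minimal sketch, assuming the q8_0 file from this repo has been downloaded
# locally; settings are illustrative.
from llama_cpp import Llama

llm = Llama(model_path="gpt-oss-20b-q8_0.gguf", n_ctx=8192)
prompt = "Analyze this position after 1. e4 e5 2. Nf3 Nc6 3. Bb5 a6:"
print(llm(prompt, max_tokens=200)["choices"][0]["text"])
```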
supadope0/Qwen3-0.6B-Gensyn-Swarm-gentle_leaping_lynx
supadope0
2025-08-11T02:28:07Z
96
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am gentle_leaping_lynx", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-09T14:33:06Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am gentle_leaping_lynx --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Emilio407/madlad400-7b-mt-bnb-4bit
Emilio407
2025-08-11T02:09:02Z
0
0
transformers
[ "transformers", "safetensors", "t5", "feature-extraction", "bnb-my-repo", "text2text-generation", "text-generation-inference", "translation", "multilingual", "en", "ru", "es", "fr", "de", "it", "pt", "pl", "nl", "vi", "tr", "sv", "id", "ro", "cs", "zh", "hu", "ja", "th", "fi", "fa", "uk", "da", "el", "no", "bg", "sk", "ko", "ar", "lt", "ca", "sl", "he", "et", "lv", "hi", "sq", "ms", "az", "sr", "ta", "hr", "kk", "is", "ml", "mr", "te", "af", "gl", "fil", "be", "mk", "eu", "bn", "ka", "mn", "bs", "uz", "ur", "sw", "yue", "ne", "kn", "kaa", "gu", "si", "cy", "eo", "la", "hy", "ky", "tg", "ga", "mt", "my", "km", "tt", "so", "ku", "ps", "pa", "rw", "lo", "ha", "dv", "fy", "lb", "ckb", "mg", "gd", "am", "ug", "ht", "grc", "hmn", "sd", "jv", "mi", "tk", "ceb", "yi", "ba", "fo", "or", "xh", "su", "kl", "ny", "sm", "sn", "co", "zu", "ig", "yo", "pap", "st", "haw", "as", "oc", "cv", "lus", "tet", "gsw", "sah", "br", "rm", "sa", "bo", "om", "se", "ce", "cnh", "ilo", "hil", "udm", "os", "lg", "ti", "vec", "ts", "tyv", "kbd", "ee", "iba", "av", "kha", "to", "tn", "nso", "fj", "zza", "ak", "ada", "otq", "dz", "bua", "cfm", "ln", "chm", "gn", "krc", "wa", "hif", "yua", "srn", "war", "rom", "bik", "pam", "sg", "lu", "ady", "kbp", "syr", "ltg", "myv", "iso", "kac", "bho", "ay", "kum", "qu", "za", "pag", "ngu", "ve", "pck", "zap", "tyz", "hui", "bbc", "tzo", "tiv", "ksd", "gom", "min", "ang", "nhe", "bgp", "nzi", "nnb", "nv", "zxx", "bci", "kv", "new", "mps", "alt", "meu", "bew", "fon", "iu", "abt", "mgh", "mnw", "tvl", "dov", "tlh", "ho", "kw", "mrj", "meo", "crh", "mbt", "emp", "ace", "ium", "mam", "gym", "mai", "crs", "pon", "ubu", "fip", "quc", "gv", "kj", "btx", "ape", "chk", "rcf", "shn", "tzh", "mdf", "ppk", "ss", "gag", "cab", "kri", "seh", "ibb", "tbz", "bru", "enq", "ach", "cuk", "kmb", "wo", "kek", "qub", "tab", "bts", "kos", "rwo", "cak", "tuc", "bum", "cjk", "gil", "stq", "tsg", "quh", "mak", "arn", "ban", "jiv", "sja", "yap", "tcy", "toj", "twu", "xal", "amu", "rmc", "hus", "nia", "kjh", "bm", "guh", "mas", "acf", "dtp", "ksw", "bzj", "din", "zne", "mad", "msi", "mag", "mkn", "kg", "lhu", "ch", "qvi", "mh", "djk", "sus", "mfe", "srm", "dyu", "ctu", "gui", "pau", "inb", "bi", "mni", "guc", "jam", "wal", "jac", "bas", "gor", "skr", "nyu", "noa", "sda", "gub", "nog", "cni", "teo", "tdx", "sxn", "rki", "nr", "frp", "alz", "taj", "lrc", "cce", "rn", "jvn", "hvn", "nij", "dwr", "izz", "msm", "bus", "ktu", "chr", "maz", "tzj", "suz", "knj", "bim", "gvl", "bqc", "tca", "pis", "prk", "laj", "mel", "qxr", "niq", "ahk", "shp", "hne", "spp", "koi", "krj", "quf", "luz", "agr", "tsc", "mqy", "gof", "gbm", "miq", "dje", "awa", "bjj", "qvz", "sjp", "tll", "raj", "kjg", "bgz", "quy", "cbk", "akb", "oj", "ify", "mey", "ks", "cac", "brx", "qup", "syl", "jax", "ff", "ber", "tks", "trp", "mrw", "adh", "smt", "srr", "ffm", "qvc", "mtr", "ann", "aa", "noe", "nut", "gyn", "kwi", "xmm", "msb", "dataset:allenai/MADLAD-400", "arxiv:2309.04662", "base_model:google/madlad400-7b-mt", "base_model:quantized:google/madlad400-7b-mt", "license:apache-2.0", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
translation
2025-08-11T02:08:16Z
--- base_model: - google/madlad400-7b-mt license: apache-2.0 language: - multilingual - en - ru - es - fr - de - it - pt - pl - nl - vi - tr - sv - id - ro - cs - zh - hu - ja - th - fi - fa - uk - da - el - "no" - bg - sk - ko - ar - lt - ca - sl - he - et - lv - hi - sq - ms - az - sr - ta - hr - kk - is - ml - mr - te - af - gl - fil - be - mk - eu - bn - ka - mn - bs - uz - ur - sw - yue - ne - kn - kaa - gu - si - cy - eo - la - hy - ky - tg - ga - mt - my - km - tt - so - ku - ps - pa - rw - lo - ha - dv - fy - lb - ckb - mg - gd - am - ug - ht - grc - hmn - sd - jv - mi - tk - ceb - yi - ba - fo - or - xh - su - kl - ny - sm - sn - co - zu - ig - yo - pap - st - haw - as - oc - cv - lus - tet - gsw - sah - br - rm - sa - bo - om - se - ce - cnh - ilo - hil - udm - os - lg - ti - vec - ts - tyv - kbd - ee - iba - av - kha - to - tn - nso - fj - zza - ak - ada - otq - dz - bua - cfm - ln - chm - gn - krc - wa - hif - yua - srn - war - rom - bik - pam - sg - lu - ady - kbp - syr - ltg - myv - iso - kac - bho - ay - kum - qu - za - pag - ngu - ve - pck - zap - tyz - hui - bbc - tzo - tiv - ksd - gom - min - ang - nhe - bgp - nzi - nnb - nv - zxx - bci - kv - new - mps - alt - meu - bew - fon - iu - abt - mgh - mnw - tvl - dov - tlh - ho - kw - mrj - meo - crh - mbt - emp - ace - ium - mam - gym - mai - crs - pon - ubu - fip - quc - gv - kj - btx - ape - chk - rcf - shn - tzh - mdf - ppk - ss - gag - cab - kri - seh - ibb - tbz - bru - enq - ach - cuk - kmb - wo - kek - qub - tab - bts - kos - rwo - cak - tuc - bum - cjk - gil - stq - tsg - quh - mak - arn - ban - jiv - sja - yap - tcy - toj - twu - xal - amu - rmc - hus - nia - kjh - bm - guh - mas - acf - dtp - ksw - bzj - din - zne - mad - msi - mag - mkn - kg - lhu - ch - qvi - mh - djk - sus - mfe - srm - dyu - ctu - gui - pau - inb - bi - mni - guc - jam - wal - jac - bas - gor - skr - nyu - noa - sda - gub - nog - cni - teo - tdx - sxn - rki - nr - frp - alz - taj - lrc - cce - rn - jvn - hvn - nij - dwr - izz - msm - bus - ktu - chr - maz - tzj - suz - knj - bim - gvl - bqc - tca - pis - prk - laj - mel - qxr - niq - ahk - shp - hne - spp - koi - krj - quf - luz - agr - tsc - mqy - gof - gbm - miq - dje - awa - bjj - qvz - sjp - tll - raj - kjg - bgz - quy - cbk - akb - oj - ify - mey - ks - cac - brx - qup - syl - jax - ff - ber - tks - trp - mrw - adh - smt - srr - ffm - qvc - mtr - ann - kaa - aa - noe - nut - gyn - kwi - xmm - msb library_name: transformers tags: - bnb-my-repo - text2text-generation - text-generation-inference datasets: - allenai/MADLAD-400 pipeline_tag: translation widget: - text: "<2en> Como vai, amigo?" example_title: "Translation to English" - text: "<2de> Do you speak German?" example_title: "Translation to German" --- # google/madlad400-7b-mt (Quantized) ## Description This model is a quantized version of the original model [`google/madlad400-7b-mt`](https://huggingface.co/google/madlad400-7b-mt). It's quantized using the BitsAndBytes library to 4-bit using the [bnb-my-repo](https://huggingface.co/spaces/bnb-community/bnb-my-repo) space. ## Quantization Details - **Quantization Type**: int4 - **bnb_4bit_quant_type**: nf4 - **bnb_4bit_use_double_quant**: True - **bnb_4bit_compute_dtype**: bfloat16 - **bnb_4bit_quant_storage**: uint8 # 📄 Original Model Information # Model Card for MADLAD-400-7B-MT # Table of Contents 0. [TL;DR](#TL;DR) 1. [Model Details](#model-details) 2. [Usage](#usage) 3. [Uses](#uses) 4. [Bias, Risks, and Limitations](#bias-risks-and-limitations) 5. 
[Training Details](#training-details) 6. [Evaluation](#evaluation) 7. [Environmental Impact](#environmental-impact) 8. [Citation](#citation) # TL;DR MADLAD-400-7B-MT is a multilingual machine translation model based on the T5 architecture that was trained on 250 billion tokens covering over 450 languages using publicly available data. It is competitive with models that are significantly larger. **Disclaimer**: [Juarez Bochi](https://huggingface.co/jbochi), who was not involved in this research, converted the original weights and wrote the contents of this model card based on the original paper and Flan-T5. # Model Details ## Model Description - **Model type:** Language model - **Language(s) (NLP):** Multilingual (400+ languages) - **License:** Apache 2.0 - **Related Models:** [All MADLAD-400 Checkpoints](https://huggingface.co/models?search=madlad) - **Original Checkpoints:** [All Original MADLAD-400 Checkpoints](https://github.com/google-research/google-research/tree/master/madlad_400) - **Resources for more information:** - [Research paper](https://arxiv.org/abs/2309.04662) - [GitHub Repo](https://github.com/google-research/t5x) - [Hugging Face MADLAD-400 Docs (Similar to T5) ](https://huggingface.co/docs/transformers/model_doc/MADLAD-400) - [Pending PR](https://github.com/huggingface/transformers/pull/27471) # Usage Find below some example scripts on how to use the model: ## Using the Pytorch model with `transformers` ### Running the model on a CPU or GPU <details> <summary> Click to expand </summary> First, install the Python packages that are required: `pip install transformers accelerate sentencepiece` ```python from transformers import T5ForConditionalGeneration, T5Tokenizer model_name = 'jbochi/madlad400-7b-mt' model = T5ForConditionalGeneration.from_pretrained(model_name, device_map="auto") tokenizer = T5Tokenizer.from_pretrained(model_name) text = "<2pt> I love pizza!" input_ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device) outputs = model.generate(input_ids=input_ids) tokenizer.decode(outputs[0], skip_special_tokens=True) # Eu adoro pizza! ``` </details> ## Running the model with Candle <details> <summary> Click to expand </summary> Usage with [candle](https://github.com/huggingface/candle): ```bash $ cargo run --example t5 --release -- \ --model-id "jbochi/madlad400-7b-mt" \ --prompt "<2de> How are you, my friend?" \ --decode --temperature 0 ``` </details> # Uses ## Direct Use and Downstream Use > Primary intended uses: Machine Translation and multilingual NLP tasks on over 400 languages. > Primary intended users: Research community. ## Out-of-Scope Use > These models are trained on general domain data and are therefore not meant to > work on domain-specific models out-of-the box. Moreover, these research models have not been assessed > for production usecases. # Bias, Risks, and Limitations > We note that we evaluate on only 204 of the languages supported by these models and on machine translation > and few-shot machine translation tasks. Users must consider use of this model carefully for their own > usecase. ## Ethical considerations and risks > We trained these models with MADLAD-400 and publicly available data to create baseline models that > support NLP for over 400 languages, with a focus on languages underrepresented in large-scale corpora. 
> Given that these models were trained with web-crawled datasets that may contain sensitive, offensive or > otherwise low-quality content despite extensive preprocessing, it is still possible that issues in the > underlying training data may cause differences in model performance and toxic (or otherwise problematic) > output for certain domains. Moreover, large models are dual-use technologies that have specific risks > associated with their use and development. We point the reader to surveys such as those written by > Weidinger et al. or Bommasani et al. for a more detailed discussion of these risks, and to Liebling > et al. for a thorough discussion of the risks of machine translation systems. ## Known Limitations More information needed ## Sensitive Use: More information needed # Training Details > We train models of various sizes: a 3B, 32-layer parameter model, > a 7.2B 48-layer parameter model and a 10.7B 32-layer parameter model. > We share all parameters of the model across language pairs, > and use a Sentence Piece Model with 256k tokens shared on both the encoder and decoder > side. Each input sentence has a <2xx> token prepended to the source sentence to indicate the target > language. See the [research paper](https://arxiv.org/pdf/2309.04662.pdf) for further details. ## Training Data > For both the machine translation and language model, MADLAD-400 is used. For the machine translation > model, a combination of parallel data sources covering 157 languages is also used. Further details are > described in the [paper](https://arxiv.org/pdf/2309.04662.pdf). ## Training Procedure See the [research paper](https://arxiv.org/pdf/2309.04662.pdf) for further details. # Evaluation ## Testing Data, Factors & Metrics > For evaluation, we used WMT, NTREX, Flores-200 and Gatones datasets as described in Section 4.3 in the [paper](https://arxiv.org/pdf/2309.04662.pdf). > The translation quality of this model varies based on language, as seen in the paper, and likely varies on > domain, though we have not assessed this. ## Results ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64b7f632037d6452a321fa15/EzsMD1AwCuFH0S0DeD-n8.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64b7f632037d6452a321fa15/CJ5zCUVy7vTU76Lc8NZcK.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64b7f632037d6452a321fa15/NK0S-yVeWuhKoidpLYh3m.png) See the [research paper](https://arxiv.org/pdf/2309.04662.pdf) for further details. # Environmental Impact More information needed # Citation **BibTeX:** ```bibtex @misc{kudugunta2023madlad400, title={MADLAD-400: A Multilingual And Document-Level Large Audited Dataset}, author={Sneha Kudugunta and Isaac Caswell and Biao Zhang and Xavier Garcia and Christopher A. Choquette-Choo and Katherine Lee and Derrick Xin and Aditya Kusupati and Romi Stella and Ankur Bapna and Orhan Firat}, year={2023}, eprint={2309.04662}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
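## Loading this 4-bit repository

The usage examples above load the full-precision `jbochi` checkpoint; this repo (`Emilio407/madlad400-7b-mt-bnb-4bit`) already stores 4-bit weights. A minimal loading sketch, assuming `bitsandbytes` and `accelerate` are installed and a CUDA GPU is available; the nf4/double-quant/bfloat16 settings listed under Quantization Details are read from the repo's config automatically.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# The 4-bit BitsAndBytes config (nf4, double quantization, bfloat16 compute)
# ships inside this repo, so no explicit quantization_config is needed here.
model_name = "Emilio407/madlad400-7b-mt-bnb-4bit"
model = T5ForConditionalGeneration.from_pretrained(model_name, device_map="auto")
tokenizer = T5Tokenizer.from_pretrained(model_name)

text = "<2en> Como vai, amigo?"
input_ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device)
outputs = model.generate(input_ids=input_ids)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```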
afatsumcemreg/multi_mathematical_modeling
afatsumcemreg
2025-08-11T01:58:51Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-08-11T01:51:17Z
--- license: apache-2.0 ---
HillPhelmuth/gpt-oss-20B-chess-analysis-merged
HillPhelmuth
2025-08-11T01:56:37Z
0
0
transformers
[ "transformers", "safetensors", "gpt_oss", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit", "base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "8-bit", "region:us" ]
text-generation
2025-08-11T01:52:28Z
--- base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gpt_oss license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** HillPhelmuth - **License:** apache-2.0 - **Finetuned from model :** unsloth/gpt-oss-20b-unsloth-bnb-4bit This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
giovannidemuri/llama8b-er-afg-v11-seed2-french
giovannidemuri
2025-08-11T01:38:08Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "conversational", "base_model:meta-llama/Llama-3.1-8B", "base_model:finetune:meta-llama/Llama-3.1-8B", "license:llama3.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-10T17:15:19Z
--- library_name: transformers license: llama3.1 base_model: meta-llama/Llama-3.1-8B tags: - generated_from_trainer model-index: - name: llama8b-er-afg-v11-seed2-french results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama8b-er-afg-v11-seed2-french This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 2 - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.52.4 - Pytorch 2.7.1+cu128 - Datasets 3.6.0 - Tokenizers 0.21.2
MJK2003/kaawi
MJK2003
2025-08-11T01:30:29Z
0
0
null
[ "license:other", "region:us" ]
null
2025-08-11T01:30:29Z
--- license: other license_name: kaawi license_link: LICENSE ---
vslinx/ComfyUIDetailerWorkflow-vslinx
vslinx
2025-08-11T01:26:43Z
0
0
null
[ "region:us" ]
null
2025-05-13T12:09:52Z
# ComfyUI Detailer / ADetailer Workflow ## Requirements (Custom Nodes) Requirements for each version are listed below or can be found inside a **Note** in the Workflow itself. Because of the many connections among the nodes, I highly recommend turning off the link visibility by clicking the **"Toggle Link visibility"** (Eye icon) in the bottom right of ComfyUI. ## Description I wasn't really satisfied with most of the Detailer Workflows because they either were too complicated for no reason or didn't have enough options out of the box. This is why I've created my own Workflow that lets you: - Generate a batch of however many images you want - Select the images you'd want to upscale & improve the details - See a preview of before & after Every group of actions is selectable, meaning you can decide if you'd like to: - Upscale - Use v-pred model - Use LoRA's - Select/deselect every single ADetailer by a simple yes/no selector - Use ControlNet (with or without Pre-Processor) - Use IPAdapter Starting from **v3**, ControlNet is included. <br> Starting from **v4**, IPAdapter is included. --- ## Requirements ### v4 - [ComfyUI Impact Pack](https://github.com/ltdrdata/ComfyUI-Impact-Pack) - [ComfyUI Impact Subpack](https://github.com/ltdrdata/ComfyUI-Impact-Subpack) - [ComfyUI-mxToolkit](https://github.com/Smirnov75/ComfyUI-mxToolkit) - [ComfyUI-Easy-Use](https://github.com/yolain/ComfyUI-Easy-Use) - [ComfyUI-Custom-Scripts](https://github.com/pythongosssss/ComfyUI-Custom-Scripts) - [ComfyUI-Crystools](https://github.com/crystian/ComfyUI-Crystools) - [ComfyUI-Image-Saver](https://github.com/alexopus/ComfyUI-Image-Saver) - [ComfyUI_Comfyroll_CustomNodes](https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes) - [ComfyUI-Advanced-ControlNet](https://github.com/Kosinkadink/ComfyUI-Advanced-ControlNet) - [ComfyUI-KJNodes](https://github.com/kijai/ComfyUI-KJNodes) - [ComfyUI_IPAdapter_plus](https://github.com/cubiq/ComfyUI_IPAdapter_plus) - [comfyui_controlnet_aux](https://github.com/Fannovel16/comfyui_controlnet_aux) - [cg-use-everywhere](https://github.com/chrisgoringe/cg-use-everywhere) - [cg-image-filter](https://github.com/chrisgoringe/cg-image-filter) - [rgthree-comfy](https://github.com/rgthree/rgthree-comfy) ### v3-3.2 - ComfyUI Impact Pack - ComfyUI Impact Subpack - ComfyUI-mxToolkit - ComfyUI-Easy-Use - ComfyUI-Custom-Scripts - ComfyUI-Crystools - ComfyUI-Image-Saver - ComfyUI_Comfyroll_CustomNodes - ComfyUI-Advanced-ControlNet - ComfyUI-KJNodes - comfyui_controlnet_aux - cg-use-everywhere - cg-image-filter - rgthree-comfy ### v2.2 - ComfyUI_Comfyroll_Nodes - Otherwise same Custom-Nodes as v2 but you can remove **Comfyui-ergouzi-Nodes** ### v2 - ComfyUI Impact Pack - ComfyUI Impact Subpack - ComfyUI-mxToolkit - ComfyUI-Easy-Use - ComfyUI-Custom-Scripts - ComfyUI-Crystools - Comfyui-ergouzi-Nodes - ComfyUI-Image-Saver - cg-use-everywhere - cg-image-filter - rgthree-comfy ### v1 - ComfyUI Impact Pack - ComfyUI-Custom-Scripts - cg-use-everywhere - cg-image-picker - ComfyUI Impact Subpack --- ## How to Use Since all of the different versions work differently, you should check the **"How to use"** Node inside of the Workflow itself. I promise that once you read the explanation of the workflow itself, it'll click and it will be a simple plug and play experience. It's the simplest I could've made it coming from someone who's only started using ComfyUI 4-5 months ago and had been exclusively an A1111WebUI user before. --- ## Missing ViT-B SAM Model? 
If you're missing the **ViT-B SAM Model** (some portable ComfyUI builds don't ship it), you can find it through the **Model Manager** in the **Comfy Manager**. You'll notice it's missing if your workflow stops after image generation and never runs the detailing step. --- ## Feedback I'd love to hear your feedback on the workflow. This is the first workflow I have ever created from scratch, so I'd love to know what you think of it. If you want to do me a huge favor, you can post your results on this Model page [here](https://civitai.com/models/1297813) - I'll make sure to send some buzz your way!
Alcoft/Qwen_Qwen3-1.7B-GGUF
Alcoft
2025-08-11T01:22:27Z
0
0
null
[ "gguf", "text-generation", "base_model:Qwen/Qwen3-1.7B", "base_model:quantized:Qwen/Qwen3-1.7B", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-08-11T01:10:31Z
--- base_model: - Qwen/Qwen3-1.7B pipeline_tag: text-generation --- |Quant|Size|Description| |---|---|---| |[Q2_K](https://huggingface.co/Alcoft/Qwen_Qwen3-1.7B-GGUF/resolve/main/Qwen_Qwen3-1.7B_Q2_K.gguf)|839.13 MB|Not recommended for most people. Very low quality.| |[Q2_K_L](https://huggingface.co/Alcoft/Qwen_Qwen3-1.7B-GGUF/resolve/main/Qwen_Qwen3-1.7B_Q2_K_L.gguf)|1.1 GB|Not recommended for most people. Uses Q8_0 for output and embedding, and Q2_K for everything else. Very low quality.| |[Q2_K_XL](https://huggingface.co/Alcoft/Qwen_Qwen3-1.7B-GGUF/resolve/main/Qwen_Qwen3-1.7B_Q2_K_XL.gguf)|1.65 GB|Not recommended for most people. Uses F16 for output and embedding, and Q2_K for everything else. Very low quality.| |[Q3_K_S](https://huggingface.co/Alcoft/Qwen_Qwen3-1.7B-GGUF/resolve/main/Qwen_Qwen3-1.7B_Q3_K_S.gguf)|954.59 MB|Not recommended for most people. Prefer any bigger Q3_K quantization. Low quality.| |[Q3_K_M](https://huggingface.co/Alcoft/Qwen_Qwen3-1.7B-GGUF/resolve/main/Qwen_Qwen3-1.7B_Q3_K_M.gguf)|1023.52 MB|Not recommended for most people. Low quality.| |[Q3_K_L](https://huggingface.co/Alcoft/Qwen_Qwen3-1.7B-GGUF/resolve/main/Qwen_Qwen3-1.7B_Q3_K_L.gguf)|1.06 GB|Not recommended for most people. Low quality.| |[Q3_K_XL](https://huggingface.co/Alcoft/Qwen_Qwen3-1.7B-GGUF/resolve/main/Qwen_Qwen3-1.7B_Q3_K_XL.gguf)|1.31 GB|Not recommended for most people. Uses Q8_0 for output and embedding, and Q3_K_L for everything else. Low quality.| |[Q3_K_XXL](https://huggingface.co/Alcoft/Qwen_Qwen3-1.7B-GGUF/resolve/main/Qwen_Qwen3-1.7B_Q3_K_XXL.gguf)|1.86 GB|Not recommended for most people. Uses F16 for output and embedding, and Q3_K_L for everything else. Low quality.| |[Q4_K_S](https://huggingface.co/Alcoft/Qwen_Qwen3-1.7B-GGUF/resolve/main/Qwen_Qwen3-1.7B_Q4_K_S.gguf)|1.15 GB|Recommended. Slightly low quality.| |[Q4_K_M](https://huggingface.co/Alcoft/Qwen_Qwen3-1.7B-GGUF/resolve/main/Qwen_Qwen3-1.7B_Q4_K_M.gguf)|1.19 GB|Recommended. Decent quality for most use cases.| |[Q4_K_L](https://huggingface.co/Alcoft/Qwen_Qwen3-1.7B-GGUF/resolve/main/Qwen_Qwen3-1.7B_Q4_K_L.gguf)|1.41 GB|Recommended. Uses Q8_0 for output and embedding, and Q4_K_M for everything else. Decent quality.| |[Q4_K_XL](https://huggingface.co/Alcoft/Qwen_Qwen3-1.7B-GGUF/resolve/main/Qwen_Qwen3-1.7B_Q4_K_XL.gguf)|1.95 GB|Recommended. Uses F16 for output and embedding, and Q4_K_M for everything else. Decent quality.| |[Q5_K_S](https://huggingface.co/Alcoft/Qwen_Qwen3-1.7B-GGUF/resolve/main/Qwen_Qwen3-1.7B_Q5_K_S.gguf)|1.35 GB|Recommended. High quality.| |[Q5_K_M](https://huggingface.co/Alcoft/Qwen_Qwen3-1.7B-GGUF/resolve/main/Qwen_Qwen3-1.7B_Q5_K_M.gguf)|1.37 GB|Recommended. High quality.| |[Q5_K_L](https://huggingface.co/Alcoft/Qwen_Qwen3-1.7B-GGUF/resolve/main/Qwen_Qwen3-1.7B_Q5_K_L.gguf)|1.55 GB|Recommended. Uses Q8_0 for output and embedding, and Q5_K_M for everything else. High quality.| |[Q5_K_XL](https://huggingface.co/Alcoft/Qwen_Qwen3-1.7B-GGUF/resolve/main/Qwen_Qwen3-1.7B_Q5_K_XL.gguf)|2.09 GB|Recommended. Uses F16 for output and embedding, and Q5_K_M for everything else. High quality.| |[Q6_K](https://huggingface.co/Alcoft/Qwen_Qwen3-1.7B-GGUF/resolve/main/Qwen_Qwen3-1.7B_Q6_K.gguf)|1.56 GB|Recommended. Very high quality.| |[Q6_K_L](https://huggingface.co/Alcoft/Qwen_Qwen3-1.7B-GGUF/resolve/main/Qwen_Qwen3-1.7B_Q6_K_L.gguf)|1.7 GB|Recommended. Uses Q8_0 for output and embedding, and Q6_K for everything else. 
Very high quality.| |[Q6_K_XL](https://huggingface.co/Alcoft/Qwen_Qwen3-1.7B-GGUF/resolve/main/Qwen_Qwen3-1.7B_Q6_K_XL.gguf)|2.24 GB|Recommended. Uses F16 for output and embedding, and Q6_K for everything else. Very high quality.| |[Q8_0](https://huggingface.co/Alcoft/Qwen_Qwen3-1.7B-GGUF/resolve/main/Qwen_Qwen3-1.7B_Q8_0.gguf)|2.02 GB|Recommended. Quality almost like F16.| |[Q8_K_XL](https://huggingface.co/Alcoft/Qwen_Qwen3-1.7B-GGUF/resolve/main/Qwen_Qwen3-1.7B_Q8_K_XL.gguf)|2.56 GB|Recommended. Uses F16 for output and embedding, and Q8_0 for everything else. Quality almost like F16.| |[F16](https://huggingface.co/Alcoft/Qwen_Qwen3-1.7B-GGUF/resolve/main/Qwen_Qwen3-1.7B_F16.gguf)|3.79 GB|Not recommended. Overkill. Prefer Q8_0.| |[ORIGINAL (BF16)](https://huggingface.co/Alcoft/Qwen_Qwen3-1.7B-GGUF/resolve/main/Qwen_Qwen3-1.7B.gguf)|3.79 GB|Not recommended. Overkill. Prefer Q8_0.| --- Quantized using [TAO71-AI AutoQuantizer](https://github.com/TAO71-AI/AutoQuantizer). You can check out the original model card [here](https://huggingface.co/Qwen/Qwen3-1.7B).
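The table above covers file sizes and quality trade-offs but not how to run a file. A minimal sketch, assuming a recent llama.cpp build that ships the `llama-cli` binary, using the Q4_K_M quant as an example:

```bash
# Fetch one quant (Q4_K_M shown) from this repo, then run it with llama.cpp.
huggingface-cli download Alcoft/Qwen_Qwen3-1.7B-GGUF \
  Qwen_Qwen3-1.7B_Q4_K_M.gguf --local-dir .

llama-cli -m Qwen_Qwen3-1.7B_Q4_K_M.gguf \
  -p "Explain GGUF quantization in one sentence." -n 128
```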
John6666/vete-seradose-ill-v2-sdxl
John6666
2025-08-11T01:19:56Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "style", "clarity", "natural expression", "lighting", "balance realism and stylization", "smooth skin", "expressive eyes", "refined textures", "anatomy", "dynamic positions", "illustrious", "en", "base_model:OnomaAIResearch/Illustrious-xl-early-release-v0", "base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2025-08-11T01:12:24Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - style - clarity - natural expression - lighting - balance realism and stylization - smooth skin - expressive eyes - refined textures - anatomy - dynamic positions - illustrious base_model: OnomaAIResearch/Illustrious-xl-early-release-v0 --- Original model is [here](https://civitai.com/models/1813891/vete-seradose-ill?modelVersionId=2099661). This model was created by [Vetehine](https://civitai.com/user/Vetehine).
MattBou00/9162oznf-rlhf-checkpoint-pythia-1b-irl-epoch-20
MattBou00
2025-08-11T01:18:53Z
0
0
null
[ "safetensors", "gpt_neox", "region:us" ]
null
2025-08-11T01:17:11Z
--- language: en tags: - rlhf - checkpoint - irl - pythia-1b library_name: transformers pipeline_tag: text-generation --- # 9162oznf-rlhf-checkpoint-pythia-1b-irl-epoch-20 This is an RLHF model checkpoint trained at epoch 20. ## Model Information - **Base Model**: EleutherAI/pythia-1b - **Reward Type**: irl - **Dataset**: allenai/real-toxicity-prompts - **Training Epoch**: 20 ## IRL Configuration - **Likelihood Type**: bradley_terry - **Normalization Strategy**: none - **IRL Artifact**: matthieubou-imperial-college-london/bayes_irl_vi/posterior_bradley_terry_05megofd:v0 - **Use Raw Score**: True ## Usage This checkpoint can be loaded with TRL's `AutoModelForCausalLMWithValueHead`: ```python from trl import AutoModelForCausalLMWithValueHead # Load the checkpoint model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/9162oznf-rlhf-checkpoint-pythia-1b-irl-epoch-20") ``` ## Training Configuration The training configuration is saved in `training_config.yaml`.
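For completeness, a generation sketch under stated assumptions: the tokenizer is taken from the base model `EleutherAI/pythia-1b` (this repo may not ship one), and `AutoModelForCausalLMWithValueHead` delegates `generate()` to the wrapped causal LM.

```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead

repo = "MattBou00/9162oznf-rlhf-checkpoint-pythia-1b-irl-epoch-20"
model = AutoModelForCausalLMWithValueHead.from_pretrained(repo)
# Assumption: reuse the base model's tokenizer, since the checkpoint is pythia-1b based.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-1b")

inputs = tokenizer("The weather today is", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```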
pduro/blockassist-bc-insectivorous_slithering_leopard_1754873751
pduro
2025-08-11T00:56:59Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "insectivorous slithering leopard", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T00:56:49Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - insectivorous slithering leopard --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
nightmedia/Qwen3-30B-A3B-CoderThinking-YOYO-linear-dwq5-mlx
nightmedia
2025-08-11T00:34:28Z
0
0
mlx
[ "mlx", "safetensors", "qwen3_moe", "merge", "text-generation", "conversational", "en", "zh", "base_model:YOYO-AI/Qwen3-30B-A3B-CoderThinking-YOYO-linear", "base_model:quantized:YOYO-AI/Qwen3-30B-A3B-CoderThinking-YOYO-linear", "license:apache-2.0", "5-bit", "region:us" ]
text-generation
2025-08-10T22:03:21Z
--- license: apache-2.0 language: - en - zh base_model: YOYO-AI/Qwen3-30B-A3B-CoderThinking-YOYO-linear pipeline_tag: text-generation tags: - merge - mlx library_name: mlx --- # Qwen3-30B-A3B-CoderThinking-YOYO-linear-dwq5-mlx This model [Qwen3-30B-A3B-CoderThinking-YOYO-linear-dwq5-mlx](https://huggingface.co/nightmedia/Qwen3-30B-A3B-CoderThinking-YOYO-linear-dwq5-mlx) was converted to MLX format from [YOYO-AI/Qwen3-30B-A3B-CoderThinking-YOYO-linear](https://huggingface.co/YOYO-AI/Qwen3-30B-A3B-CoderThinking-YOYO-linear) using mlx-lm version **0.26.3**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("nightmedia/Qwen3-30B-A3B-CoderThinking-YOYO-linear-dwq5-mlx") prompt = "hello" if tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754872271
IvanJAjebu
2025-08-11T00:32:30Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T00:32:07Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754871152
IvanJAjebu
2025-08-11T00:14:00Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T00:13:33Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
wangpuupup/wat_owsm_v1
wangpuupup
2025-08-10T23:34:14Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-08-21T19:30:38Z
--- license: apache-2.0 ---
xihc-ucb/Qwen3-1.7B-train-Quasar-0809
xihc-ucb
2025-08-10T23:21:50Z
2
0
transformers
[ "transformers", "safetensors", "fp8_qwen3", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "region:us" ]
text-generation
2025-08-10T01:53:10Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
roshanis/gpt-oss-med-merge
roshanis
2025-08-10T23:17:40Z
0
0
null
[ "safetensors", "gpt_oss", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
2025-08-10T23:16:13Z
--- license: apache-2.0 ---
guangyaoz/dpo
guangyaoz
2025-08-10T23:15:26Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "dpo", "arxiv:2305.18290", "base_model:Qwen/Qwen2.5-1.5B-Instruct", "base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-07-31T05:09:42Z
--- base_model: Qwen/Qwen2.5-1.5B-Instruct library_name: transformers model_name: dpo tags: - generated_from_trainer - trl - dpo licence: license --- # Model Card for dpo This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="guangyaoz/dpo", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.20.0 - Transformers: 4.53.2 - Pytorch: 2.7.0+cu128 - Datasets: 4.0.0 - Tokenizers: 0.21.2 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
developer-314e/result
developer-314e
2025-08-10T23:06:17Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2.5-VL-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-08-08T11:46:04Z
--- base_model: Qwen/Qwen2.5-VL-7B-Instruct library_name: transformers model_name: result tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for result This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="developer-314e/result", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.13.0 - Transformers: 4.51.1 - Pytorch: 2.5.1 - Datasets: 3.2.0 - Tokenizers: 0.21.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Inishds/smolvla_adaptor_object
Inishds
2025-08-10T22:55:42Z
0
0
lerobot
[ "lerobot", "safetensors", "robotics", "smolvla", "dataset:aopolin-lv/libero_object_no_noops_lerobot_v21", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
robotics
2025-08-10T15:36:17Z
--- base_model: lerobot/smolvla_base datasets: aopolin-lv/libero_object_no_noops_lerobot_v21 library_name: lerobot license: apache-2.0 model_name: smolvla pipeline_tag: robotics tags: - robotics - smolvla - lerobot --- # Model Card for smolvla <!-- Provide a quick summary of what the model is/does. --> [SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware. This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index). --- ## How to Get Started with the Model For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is the short version on how to train and run inference/eval: ### Train from scratch ```bash python -m lerobot.scripts.train \ --dataset.repo_id=${HF_USER}/<dataset> \ --policy.type=smolvla \ --output_dir=outputs/train/<desired_policy_repo_id> \ --job_name=lerobot_training \ --policy.device=cuda \ --policy.repo_id=${HF_USER}/<desired_policy_repo_id> --wandb.enable=true ``` _Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._ ### Evaluate the policy/run inference ```bash python -m lerobot.record \ --robot.type=so100_follower \ --dataset.repo_id=<hf_user>/eval_<dataset> \ --policy.path=<hf_user>/<desired_policy_repo_id> \ --episodes=10 ``` Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint. --- ## Model Details - **License:** apache-2.0
ESDVDFTA/Viral.18.HORRIFYING.Final.Moments.of.Orca.Trainer.Jessica.Radcliffe.Caught.on.Video
ESDVDFTA
2025-08-10T22:55:11Z
0
0
null
[ "region:us" ]
null
2025-08-10T22:52:08Z
Watch ➤ <a href="https://votix.cfd/huging">Viral.18.HORRIFYING.Final.Moments.of.Orca.Trainer.Jessica.Radcliffe.Caught.on.Video</a>
sukrucildirr/blockassist-bc-miniature_frisky_cobra_1754865919
sukrucildirr
2025-08-10T22:46:45Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "miniature frisky cobra", "arxiv:2504.07091", "region:us" ]
null
2025-08-10T22:46:11Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - miniature frisky cobra --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
lulu-2/ppo-LunarLander-v3
lulu-2
2025-08-10T22:42:01Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2025-08-10T22:41:51Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -184.67 +/- 131.05 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 50000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'lulu-2/ppo-LunarLander-v3' 'batch_size': 512 'minibatch_size': 128} ```
alphaoumardev/Llama3-8B-noryu-instruct
alphaoumardev
2025-08-10T22:09:44Z
0
1
transformers
[ "transformers", "safetensors", "llama", "meta", "instruction-tuned", "causal-lm", "huggingface", "llama3.1", "text-generation", "conversational", "en", "dataset:alphaoumardev/it-support-level-1-qa", "arxiv:2404.18988", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:finetune:meta-llama/Llama-3.1-8B-Instruct", "license:llama3", "endpoints_compatible", "region:us" ]
text-generation
2025-07-25T21:41:29Z
--- license: llama3 language: - en metrics: - accuracy - bertscore - bleu - bleurt pipeline_tag: text-generation datasets: - alphaoumardev/it-support-level-1-qa base_model: - meta-llama/Llama-3.1-8B-Instruct tags: - llama - meta - instruction-tuned - causal-lm - transformers - huggingface - llama3.1 --- # Model Card for meta-llama/Llama-3.1-8B (Instruction-Tuned) This model is a powerful, multilingual instruction-tuned autoregressive LLM developed by Meta that excels at chat, reasoning, coding, and long-context tasks. ## Model Details ### Model Description Llama 3.1 8B is part of Meta's Llama 3.1 collection—released July 23, 2024—including 8B, 70B, and 405B parameter models. It was pre-trained on ~15 trillion tokens of multilingual text and code, with a context window of 128K tokens. Instruction-tuning used supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) to optimize for assistive tasks. - **Developed by:** Meta AI - **Model type:** Decoder-only transformer (auto-regressive) - **Input/Output modality:** Multilingual text and code - **Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, Thai (+ broad multilingual support) - **Context window:** 128,000 tokens - **Knowledge cutoff:** December 2023 - **License:** Llama 3.1 Community License (custom commercial) - **Finetuned from:** Base pretrained Llama 3.1 8B ### Model Sources - **Repository:** `https://huggingface.co/meta-llama/Llama-3.1-8B` - **Paper:** “Introducing Llama 3” blog post by Meta AI, April 18, 2024; updated to version 3.1 July 23, 2024 - **Demo:** Available via transformers pipeline, or hosted on Meta.ai and WhatsApp ## Uses ### Direct Use Ideal for multilingual chatbots, reasoning assistants, code generation, summarization, data synthesis, and long-context tasks (document analysis, RAG). ### Downstream Use Can be fine-tuned for domain-specific applications such as RAG, summarization, topic-controlled dialogue, coding agents, multimodal reasoning pipelines. ### Out-of-Scope Use Not designed for vision (image, audio, video generation). Avoid using for disallowed content per license (e.g., illicit or unsafe instructions). ## Bias, Risks, and Limitations - May produce biased or unsafe content, hallucinatory outputs, and reflection of training data biases. - Context window misuse could cause unexpected behavior. - Not fully safe for sensitive/legal/medical advice without guardrails. ### Recommendations Use with moderation filters, human oversight, prompt safety checks, and evaluation for target domain bias and safety. ## How to Get Started ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "meta-llama/Llama-3.1-8B" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) inputs = tokenizer("Tell me a story about a dragon:", return_tensors="pt") outputs = model.generate(**inputs, max_new_tokens=200) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## Training Details ### Training Data Pre-trained on a cleaned corpus of \~15 trillion public tokens (multilingual text/code).
Instruction tuning used public datasets and \~25M synthetic examples from SFT/RLHF ([Collabnix][1], [Lifewire][2], [Hugging Face][3]). ### Training Procedure * **Preprocessing:** Public web, code, and instruction data filtered via Meta classifiers. * **Hyperparameters:** Referenced in local repo; mix of SFT & RLHF; context length up to 128K. #### Speeds, Sizes, Times * Pretraining: 15 trillion tokens; \~1.46 M GPU hours for 8B model ([Collabnix][1]). * Checkpoint size: \~8 B parameters; \~30–40 GB depending on format (fp16, bfloat16). ## Evaluation ### Testing Data & Metrics Benchmarked on multilingual tasks (MMLU, coding, reasoning), outperforming many open and closed models ([Hugging Face][3]). * Instruction-tuned 8B: \~69.4% MMLU; latency \~280 ms TTFT; \~193 tokens/sec ([Hugging Face][3]). ### Results Summary | Metric | Value | | --------------------- | ------------------ | | MMLU (instruction) | \~69.4% | | Perplexity (The Pile) | \~8.28 (fp16) | | Throughput | \~192.9 tokens/sec | | Time-to-first-token | \~0.28 sec | ## Environmental Impact * **Pretraining compute:** \~1.46M GPU hours (H100s) for 8B; \~15T tokens. * **Estimated CO₂e emissions:** Use ML CO₂ Impact calculator for specifics. ## Technical Specifications ### Architecture * Decoder-only Transformer with SwiGLU, rotary embeddings, RMSNorm, Grouped-Query Attention (GQA); 32 layers, 8B parameters ([arXiv][4], [Prompthub][5], [Collabnix][1], [Wikipedia][6]). ### Compute Infrastructure * Pretrained on large Meta GPU clusters, likely H100-based. ### Software * Implemented in PyTorch and Hugging Face Transformers (v4.43+) ([Hugging Face][3]). ## Citation ```bibtex @misc{together2024llama3, title={Introducing Llama 3}, author={Meta AI}, howpublished={\url{https://ai.meta.com/blog/meta-llama-3/}}, year={2024}, note={Version 3.1 released July 23, 2024} } ``` [1]: https://collabnix.com/llama-3-1-405b-70b-8b-with-multilinguality-and-long-context/?utm_source=chatgpt.com "Llama 3.1 - 405B, 70B & 8B with Multilinguality and Long Context" [2]: https://www.lifewire.com/llama-2-vs-llama-3-8714445?utm_source=chatgpt.com "Llama 3 vs. Llama 2: Why the Newest Model Leaves Its Predecessor in the Dust" [3]: https://huggingface.co/meta-llama/Llama-3.1-8B?utm_source=chatgpt.com "meta-llama/Llama-3.1-8B - Hugging Face" [4]: https://arxiv.org/abs/2404.18988?utm_source=chatgpt.com "Markovian Transformers for Informative Language Modeling" [5]: https://www.prompthub.us/models/llama-3-1-8b?utm_source=chatgpt.com "Llama 3.1 8B Model Card - PromptHub" [6]: https://en.wikipedia.org/wiki/Llama_%28language_model%29?utm_source=chatgpt.com "Llama (language model)"
roeker/blockassist-bc-quick_wiry_owl_1754863285
roeker
2025-08-10T22:03:17Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "quick wiry owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-10T22:02:24Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - quick wiry owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
fbaldassarri/EleutherAI_pythia-1.4b-autogptq-int4-gs64-asym
fbaldassarri
2025-08-10T22:00:51Z
0
0
null
[ "safetensors", "gpt_neox", "pytorch", "causal-lm", "pythia", "autoround", "intel-autoround", "auto-round", "intel", "woq", "gptq", "auto-gptq", "autogptq", "eleutheraI", "text-generation", "en", "dataset:EleutherAI/pile", "base_model:EleutherAI/pythia-1.4b", "base_model:quantized:EleutherAI/pythia-1.4b", "license:apache-2.0", "4-bit", "region:us" ]
text-generation
2025-08-10T21:56:19Z
--- language: - en tags: - pytorch - causal-lm - pythia - autoround - intel-autoround - auto-round - intel - woq - gptq - auto-gptq - autogptq - eleutheraI license: apache-2.0 model_name: Pythia 1.4b base_model: EleutherAI/pythia-1.4b inference: false model_creator: EleutherAI datasets: - EleutherAI/pile pipeline_tag: text-generation prompt_template: '{prompt} ' quantized_by: fbaldassarri --- ## Model Information Quantized version of [EleutherAI/pythia-1.4b](https://huggingface.co/EleutherAI/pythia-1.4b) using torch.float32 for quantization tuning. - 4 bits (INT4) - group size = 64 - Asymmetrical Quantization - Method WoQ: GPTQ (AutoGPTQ algorithm) Quantization framework: [Intel AutoRound](https://github.com/intel/auto-round) v0.5.1 Note: this INT4 version of pythia-1.4b has been quantized to run inference on CPU. ## Replication Recipe ### Step 1 Install Requirements I suggest installing the requirements into a dedicated Python virtualenv or conda environment. ``` wget https://github.com/intel/auto-round/archive/refs/tags/v0.5.1.tar.gz tar -xvzf v0.5.1.tar.gz cd auto-round-0.5.1 pip install -r requirements-cpu.txt --upgrade ``` ### Step 2 Build Intel AutoRound wheel from sources ``` pip install -vvv --no-build-isolation -e .[cpu] ``` ### Step 3 Script for Quantization ``` from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "EleutherAI/pythia-1.4b" model = AutoModelForCausalLM.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) from auto_round import AutoRound bits, group_size, sym, device, amp = 4, 64, False, 'cpu', False autoround = AutoRound(model, tokenizer, nsamples=128, iters=200, seqlen=512, batch_size=4, bits=bits, group_size=group_size, sym=sym, device=device, amp=amp) autoround.quantize() output_dir = "./AutoRound/EleutherAI_pythia-1.4b-autogptq-int4-gs64-asym" autoround.save_quantized(output_dir, format='auto_gptq', inplace=True) ``` ## License [Apache 2.0 License](https://choosealicense.com/licenses/apache-2.0/) ## Disclaimer This quantized model comes with no warranty. It has been developed only for research purposes.
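The recipe above ends at quantization. A minimal inference sketch follows, assuming a GPTQ-capable backend for transformers is installed (e.g. optimum with auto-gptq, or Intel's CPU extensions); the tokenizer is loaded from the base repo, since `save_quantized` may not store it alongside the weights.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the INT4 checkpoint produced by Step 3. A GPTQ-capable backend
# (e.g. optimum + auto-gptq, or Intel's CPU extensions) is assumed.
quantized_dir = "./AutoRound/EleutherAI_pythia-1.4b-autogptq-int4-gs64-asym"
model = AutoModelForCausalLM.from_pretrained(quantized_dir, device_map="cpu")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-1.4b")

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```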
m-mulet/try2_qwen_2.5_7b-owl_student_removed_top_5_influential
m-mulet
2025-08-10T21:55:51Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "base_model:unsloth/Qwen2.5-7B-Instruct", "base_model:finetune:unsloth/Qwen2.5-7B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-10T21:55:45Z
--- base_model: unsloth/Qwen2.5-7B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** m-mulet - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2.5-7B-Instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
uniswap/blockassist-bc-soaring_rough_bear_1754862123
uniswap
2025-08-10T21:43:02Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "soaring rough bear", "arxiv:2504.07091", "region:us" ]
null
2025-08-10T21:42:44Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - soaring rough bear --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
nypgd/doktor-gemma3-4b
nypgd
2025-08-10T21:32:24Z
0
0
transformers
[ "transformers", "safetensors", "gemma3", "image-text-to-text", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/gemma-3-4b-it", "base_model:finetune:unsloth/gemma-3-4b-it", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-08-10T21:29:43Z
--- base_model: unsloth/gemma-3-4b-it tags: - text-generation-inference - transformers - unsloth - gemma3 - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** nypgd - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-3-4b-it This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
xiaoabcd/Llama-3.1-8B-bnb-4bit-wenyanwen
xiaoabcd
2025-08-10T21:29:44Z
0
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation-inference", "unsloth", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-10T20:28:04Z
--- base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** xiaoabcd - **License:** apache-2.0 - **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
salakmisinx/blockassist-bc-placid_armored_frog_1754861309
salakmisinx
2025-08-10T21:29:28Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "placid armored frog", "arxiv:2504.07091", "region:us" ]
null
2025-08-10T21:29:21Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - placid armored frog --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
xnvl/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-quiet_pouncing_tuna
xnvl
2025-08-10T21:23:22Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am quiet_pouncing_tuna", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-10T21:21:08Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am quiet_pouncing_tuna --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
asigalov61/Orpheus-Music-Transformer
asigalov61
2025-08-10T21:19:00Z
0
5
null
[ "Orpheus", "MIDI", "music-ai", "music-transformer", "SOTA", "multi-instrumental", "music", "en", "dataset:projectlosangeles/Godzilla-MIDI-Dataset", "license:apache-2.0", "region:us" ]
null
2025-03-31T23:10:51Z
--- license: apache-2.0 datasets: - projectlosangeles/Godzilla-MIDI-Dataset language: - en tags: - Orpheus - MIDI - music-ai - music-transformer - SOTA - multi-instrumental - music metrics: - accuracy --- # Orpheus Music Transformer ## SOTA 8k multi-instrumental music transformer trained on 2.31M+ high-quality MIDIs ![Orpheus-Music-Transformer-Artwork-1.jpg](https://cdn-uploads.huggingface.co/production/uploads/5f57ea2d3f32f12a3c0692e6/ga9kOTV6mH8nDljTw2OsO.jpeg) *** ## Abstract ### Project Los Angeles is very proud to present **Orpheus Music Transformer**, an efficient, SOTA transformer model for long-form, multi-instrumental music generation. At its core lies a 479M-parameter autoregressive transformer equipped with Rotary Positional Embeddings (RoPE) and Flash Attention, enabling sequence lengths up to 8k tokens—sufficient to capture extended musical structures. Trained for three epochs on 2.31 million high-quality MIDI tracks from the Godzilla dataset, our model employs a compact 3-token-per-note and 7-token-per-tri-chord encoding, plus a novel duration-and-velocity-last ordering to enhance expressivity. We leverage PyTorch’s bfloat16 precision and memory-efficient sparse-dense products for accelerated inference on CUDA, and provide a top-*p* sampling filter with adjustable temperature. ### The Gradio interface empowers users to upload seed MIDI files or generate from scratch, tune prime/generation token counts, control randomness (temperature, top-*p*), and optionally append drums or natural “outro” tokens. Generated outputs appear in ten parallel batches with synchronized audio previews and piano-roll plots. Users can iteratively add or remove entire batches to sculpt a final composition, which is rendered back into MIDI and audio via an integrated SoundFont pipeline. Our release demonstrates a seamless blend of state-of-the-art model performance, efficient MIDI tokenization, and user-centric design, fostering rapid exploration of algorithmic composition. *** ## Models #### Presented are two models: ### **[Orpheus Music Transformer Model](https://huggingface.co/asigalov61/Orpheus-Music-Transformer/blob/main/Orpheus_Music_Transformer_Trained_Model_96332_steps_0.82_loss_0.748_acc.pth)** #### This is a base model that is capable of music generation/continuation and notes/drums inpainting ### **[Orpheus Bridge Music Transformer Model](https://huggingface.co/asigalov61/Orpheus-Music-Transformer/blob/main/Orpheus_Bridge_Music_Transformer_Trained_Model_19571_steps_0.9396_loss_0.7365_acc.pth)** #### This is an auxiliary model that is capable of seamless bridge inpainting/infilling in any music composition *** ## Live Hugging Face spaces demos ### **[Orpheus Music Transformer](https://huggingface.co/collections/asigalov61/orpheus-music-transformer-685c3c8e59ed1414c02bb8cd)** #### If you enjoyed any of the Orpheus Music Transformer demos, please star and duplicate. It helps a lot! 
🤗 *** ## Inference notebooks ### [NEW & SOTA] **[Orpheus Auto-Continuations Generator](https://huggingface.co/asigalov61/Orpheus-Music-Transformer/blob/main/inference_code/Orpheus_Auto_Continuations_Generator.ipynb)** ### **[Orpheus Drums Transformer](https://huggingface.co/asigalov61/Orpheus-Music-Transformer/blob/main/inference_code/Orpheus_Drums_Transformer.ipynb)** *** ## Training dataset code ### Models were trained on select HQ MIDIs from [Godzilla MIDI Dataset](https://huggingface.co/datasets/projectlosangeles/Godzilla-MIDI-Dataset) ### Please check out the [Orpheus Training Dataset Maker](https://huggingface.co/asigalov61/Orpheus-Music-Transformer/blob/main/training_data/README.md) notebook for details *** ## Models training code ### Please check out the [Orpheus Music Transformer Maker](https://huggingface.co/asigalov61/Orpheus-Music-Transformer/blob/main/training_code/README.md) code/notebook for details *** ### Project Los Angeles ### Tegridy Code 2025
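The abstract above mentions a top-*p* sampling filter with adjustable temperature. For readers unfamiliar with the technique, a generic nucleus-sampling sketch (illustrative only, not the project's actual implementation):

```python
import torch

def sample_top_p(logits: torch.Tensor, temperature: float = 0.9, top_p: float = 0.96) -> int:
    """Sample one token id from 1-D logits with temperature and nucleus filtering."""
    probs = torch.softmax(logits / temperature, dim=-1)
    sorted_probs, sorted_idx = torch.sort(probs, descending=True)
    cumulative = torch.cumsum(sorted_probs, dim=-1)
    # Drop tokens outside the nucleus; the top token is always kept because
    # its cumulative-minus-own probability is zero.
    sorted_probs[cumulative - sorted_probs > top_p] = 0.0
    sorted_probs /= sorted_probs.sum()
    return int(sorted_idx[torch.multinomial(sorted_probs, 1)])
```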
ecamli/blockassist-bc-hulking_soft_hippo_1754858615
ecamli
2025-08-10T20:44:23Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "hulking soft hippo", "arxiv:2504.07091", "region:us" ]
null
2025-08-10T20:44:03Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - hulking soft hippo --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Dombili2038/blockassist-bc-jumping_beaked_hamster_1754858433
Dombili2038
2025-08-10T20:40:59Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "jumping beaked hamster", "arxiv:2504.07091", "region:us" ]
null
2025-08-10T20:40:56Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - jumping beaked hamster --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Lava8888/smolvla-omy-checkpoints
Lava8888
2025-08-10T20:36:01Z
0
0
null
[ "safetensors", "region:us" ]
null
2025-08-10T20:35:01Z
# SmolVLA-OMY Model Checkpoints This repository contains training checkpoints for a SmolVLA (Small Vision-Language-Action) model trained on the ArrangeVegetables task. ## Model Details - **Model Type**: SmolVLA (Vision-Language-Action model) - **Task**: ArrangeVegetables manipulation task - **Training Steps**: 20,000 steps - **Batch Size**: 350 - **Chunk Size**: 5 action steps - **Input Features**: - Visual observations: 256x256 RGB images (both main camera and wrist camera) - State observations: 6-dimensional state vector - **Output Features**: 12-dimensional action space ## Checkpoint Structure The repository contains checkpoints saved at different training steps: - `000500/`: Checkpoint at 500 steps - `001000/`: Checkpoint at 1,000 steps - `001500/`: Checkpoint at 1,500 steps - `002000/`: Checkpoint at 2,000 steps Each checkpoint contains: - `pretrained_model/`: Model weights and configuration - `training_state/`: Optimizer state, scheduler state, and training metadata ## Training Configuration - **Device**: CUDA - **Seed**: 42 - **Workers**: 24 - **Evaluation Frequency**: Every 5 steps - **Logging Frequency**: Every step - **Image Resize**: 512x512 with padding - **Normalization**: Identity for visual, mean-std for state/action ## Usage To load a checkpoint (note: `your_training_framework` below is a placeholder; substitute the loader from your own training stack): ```python from your_training_framework import load_checkpoint # Load the latest checkpoint (2,000 steps) model = load_checkpoint("./002000/pretrained_model/") ``` ## Dataset Trained on the ArrangeVegetables dataset available at: `lava8888/ArrangeVegetables`
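Since SmolVLA policies ship with LeRobot, loading `pretrained_model/` through LeRobot's policy API is one plausible concrete loader. A hypothetical sketch (the class name and import path are assumptions and move between LeRobot versions; verify against your installed release):

```python
# Hypothetical: recent LeRobot releases expose a SmolVLA policy class with a
# from_pretrained() loader inherited from the HF hub mixin. The import path
# below is an assumption, not verified against this repository.
from lerobot.common.policies.smolvla.modeling_smolvla import SmolVLAPolicy

policy = SmolVLAPolicy.from_pretrained("./002000/pretrained_model/")
policy.eval()
```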
dreamygeek/blockassist-bc-swift_amphibious_alpaca_1754856132
dreamygeek
2025-08-10T20:30:31Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "swift amphibious alpaca", "arxiv:2504.07091", "region:us" ]
null
2025-08-10T20:30:03Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - swift amphibious alpaca --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
exdysa/DFN2B-CLIP-ViT-L-14-39B-SAFETENSORS
exdysa
2025-08-10T20:24:45Z
8
0
null
[ "safetensors", "clip", "Apple", "OpenAI", "zero-shot-image-classification", "en", "dataset:CommonPool-12.8B", "arxiv:2309.17425", "base_model:openai/clip-vit-large-patch14", "base_model:finetune:openai/clip-vit-large-patch14", "license:apple-amlr", "region:us" ]
zero-shot-image-classification
2025-07-07T00:07:36Z
--- name: DFN2B-CLIP-ViT-L-14-39B-SAFETENSORS base_model: openai/clip-vit-large-patch14 license: apple-amlr pipeline_tag: zero-shot-image-classification tags: - clip - Apple - OpenAI size: - 1710540580 - 1.7 GB tasks: - contrastive image-text - vision language: en papers: - https://arxiv.org/abs/2309.17425 datasets: - CommonPool-12.8B license_link: LICENSE --- > [!IMPORTANT] > Original Model Link : [https://huggingface.co/apple/DFN2B-CLIP-ViT-L-14-39B](https://huggingface.co/apple/DFN2B-CLIP-ViT-L-14-39B) > ``` name: DFN2B-CLIP-ViT-L-14-39B-SAFETENSORS base_model: openai/clip-vit-large-patch14 license: apple-amlr pipeline_tag: zero-shot-image-classification tags: - clip - Apple - OpenAI size: - 1710540580 - 1.7 GB tasks: - contrastive image-text - vision language: en papers: - https://arxiv.org/abs/2309.17425 datasets: - CommonPool-12.8B license_link: LICENSE ``` # DFN2B-CLIP-ViT-L-14-39B-SAFETENSORS A drop-in replacement for the OpenCLIP checkpoint, trained on DFN-2B, a dataset selected by a Data Filtering Network from the 12.8B uncurated image-text pairs of CommonPool-12.8B
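Since this is billed as a drop-in OpenCLIP replacement, usage should mirror the upstream Apple checkpoint. A minimal zero-shot sketch, assuming the hf-hub loading path of the original repo (swap in a local path to these safetensors if loading this mirror directly):

```python
import torch
from PIL import Image
from open_clip import create_model_from_pretrained, get_tokenizer

# Assumption: loading via the upstream repo id, as on Apple's model card.
model, preprocess = create_model_from_pretrained("hf-hub:apple/DFN2B-CLIP-ViT-L-14")
tokenizer = get_tokenizer("ViT-L-14")

image = preprocess(Image.open("example.jpg")).unsqueeze(0)
text = tokenizer(["a photo of a cat", "a photo of a dog"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print(probs)  # class probabilities for the two captions
```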
neph1/1980s_horror_movies_wan2.2
neph1
2025-08-10T20:14:01Z
0
1
diffusers
[ "diffusers", "lora", "template:diffusion-lora", "text-to-video", "t2v", "base_model:Wan-AI/Wan2.2-T2V-A14B", "base_model:adapter:Wan-AI/Wan2.2-T2V-A14B", "region:us" ]
text-to-video
2025-08-10T20:07:35Z
--- tags: - lora - diffusers - template:diffusion-lora - text-to-video - t2v widget: - output: url: https://cdn-uploads.huggingface.co/production/uploads/653cd3049107029eb004f968/JHkS3LHdHfURY6rZHCIxB.mp4 text: '-' - output: url: https://cdn-uploads.huggingface.co/production/uploads/653cd3049107029eb004f968/NEoySb2yKaeZk2RAfvUV8.mp4 text: '-' - output: url: https://cdn-uploads.huggingface.co/production/uploads/653cd3049107029eb004f968/fnQui2raXqLnN33_MmTWp.mp4 text: '-' base_model: - Wan-AI/Wan2.2-T2V-A14B instance_prompt: null --- # 1980s horror movies Wan2.2 T2V 14B Mirror of: https://civitai.com/models/1592586/1980s-horror-movies-lora The low-noise model is trained for 30 epochs; the high-noise model for 21 epochs.
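The card does not include a loading snippet. One plausible route is the diffusers LoRA loader; a hypothetical sketch (the `WanPipeline` class and the `Wan-AI/Wan2.2-T2V-A14B-Diffusers` repo id are assumptions, and Wan 2.2's separate high/low-noise transformers may each need their own LoRA, which is not shown here):

```python
import torch
from diffusers import WanPipeline
from diffusers.utils import export_to_video

# Assumptions: pipeline class and base repo id; check the diffusers docs for Wan 2.2.
pipe = WanPipeline.from_pretrained("Wan-AI/Wan2.2-T2V-A14B-Diffusers", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("neph1/1980s_horror_movies_wan2.2")  # may need weight_name= to pick high vs. low noise
pipe.to("cuda")

video = pipe(prompt="1980s horror movie, grainy VHS footage of a fog-covered street at night").frames[0]
export_to_video(video, "output.mp4", fps=16)
```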
mrzaki380/blockassist-bc-silent_secretive_seahorse_1754856454
mrzaki380
2025-08-10T20:08:59Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "silent secretive seahorse", "arxiv:2504.07091", "region:us" ]
null
2025-08-10T20:08:31Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - silent secretive seahorse --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Nimbz/M3.2-36B-Animus-V8.0_EXL3
Nimbz
2025-08-10T20:03:42Z
18
0
null
[ "mistral", "finetune", "roleplay", "chat", "wings-of-fire", "custom-tokenizer", "exl3", "quantization", "base_model:Darkhn/M3.2-36B-Animus-V8.0", "base_model:quantized:Darkhn/M3.2-36B-Animus-V8.0", "license:other", "region:us" ]
null
2025-08-06T09:23:40Z
--- license: other base_model: Darkhn/M3.2-36B-Animus-V8.0 tags: - mistral - finetune - roleplay - chat - wings-of-fire - custom-tokenizer - exl3 - quantization base_model_relation: quantized quantized_by: ThugHugger --- # EXL3 quants of [Darkhn/M3.2-36B-Animus-V8.0](https://huggingface.co/Darkhn/M3.2-36B-Animus-V8.0) finetuned by [@Darkhn](https://huggingface.co/Darkhn). ## Table of quants **calculated KV-cache cost for this model: 8k context at q8 ≈ 1 GB VRAM** \ *(flash attention is not taken into account in this calculation; it is highly recommended, as it further improves compute and memory efficiency.)* \ *Please request any additional quant sizes you are missing.* | Link | bpw | hb | size | | --- | --- | --- | --- | | [2.125bpw-h6](https://huggingface.co/ThugHugger/M3.2-36B-Animus-V8.0_EXL3/tree/2.125bpw-h6) | 2.125 | 6 | ~9.12GB | | [2.75bpw-h6](https://huggingface.co/ThugHugger/M3.2-36B-Animus-V8.0_EXL3/tree/2.75bpw-h6) | 2.75 | 6 | ~11.80GB | | [3.0bpw-h6](https://huggingface.co/ThugHugger/M3.2-36B-Animus-V8.0_EXL3/tree/3.0bpw-h6) | 3.0 | 6 | ~12.87GB | | [4.0bpw-h6](https://huggingface.co/ThugHugger/M3.2-36B-Animus-V8.0_EXL3/tree/4.0bpw-h6) | 4.0 | 6 | ~17.17GB | | [4.5bpw-h6](https://huggingface.co/ThugHugger/M3.2-36B-Animus-V8.0_EXL3/tree/4.5bpw-h6) | 4.5 | 6 | ~19.31GB | | [5.75bpw-h6](https://huggingface.co/ThugHugger/M3.2-36B-Animus-V8.0_EXL3/tree/5.75bpw-h6) | 5.75 | 6 | ~24.68GB | | [6.25bpw-h6](https://huggingface.co/ThugHugger/M3.2-36B-Animus-V8.0_EXL3/tree/6.25bpw-h6) | 6.25 | 6 | ~26.82GB | | [6.5bpw-h6](https://huggingface.co/ThugHugger/M3.2-36B-Animus-V8.0_EXL3/tree/6.5bpw-h6) | 6.5 | 6 | ~27.89GB | | [8.0bpw-h8](https://huggingface.co/ThugHugger/M3.2-36B-Animus-V8.0_EXL3/tree/8.0bpw-h8) | 8.0 | 8 | ~34.33GB | The 2.125bpw EXL3 should *at least* be comparable to a 3.0bpw EXL2 quant in overall perplexity. \ Read more: https://github.com/turboderp-org/exllamav3/blob/master/doc/exl3.md # You can find the original model card, provided by the author, below. 
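A quick sanity check of the sizing rule above, placed here before the original card (a rough sketch: it assumes roughly 36B parameters and takes the 8k-at-q8 cache heuristic at face value):

```python
# Rough VRAM estimate for an EXL3 quant of a ~36B-parameter model.
# weights ~= params * bits-per-weight / 8; KV cache per the card's heuristic
# of ~1 GB per 8k tokens at q8 (flash attention not counted).
def vram_gb(bpw: float, params_b: float = 36.0, ctx_tokens: int = 8192) -> float:
    weights_gb = params_b * bpw / 8
    kv_cache_gb = ctx_tokens / 8192
    return weights_gb + kv_cache_gb

print(f"{vram_gb(4.0):.1f} GB")  # 4.0bpw at 8k context -> ~19 GB total budget
```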
<style> body { font-family: 'Quicksand', sans-serif; /* Replaced purple gradient with a warm, fiery one */ background: linear-gradient(135deg, #4a1e00 0%, #1c0a00 100%); /* Changed text color to a warmer, parchment-like off-white */ color: #F5EFE6; margin: 0; padding: 0; font-size: 16px; } h1, h2, h3, h4, summary { font-family: 'Cinzel', serif; } .container { margin: 20px auto; max-width: 900px; /* Darker, warmer container background */ background-color: rgba(28, 22, 18, 0.95); padding: 30px; border-radius: 12px; /* Swapped purple glow for a fiery orange one */ box-shadow: 0 4px 20px rgba(255, 140, 0, 0.15); border: 1px solid rgba(255, 140, 0, 0.2); outline: 1px solid rgba(255, 140, 0, 0.5); outline-offset: -1px; position: relative; } .container::before { content: ''; position: absolute; top: -1px; left: -1px; right: -1px; bottom: -1px; /* Border color changed to orange */ border: 1px solid rgba(255, 165, 0, 0.98); border-radius: 12px; pointer-events: none; animation: borderGlow 2.5s ease-in-out infinite; } @keyframes borderGlow { 0% { /* Glow effect is now a flickering orange */ box-shadow: 0 0 5px rgba(255, 165, 0, 0.98); } 50% { box-shadow: 0 0 12px rgba(255, 165, 0, 0.98); /* Made glow slightly more subtle */ } 100% { box-shadow: 0 0 5px rgba(255, 165, 0, 0.98); } } .header h1 { font-size: 32px; /* Main heading color is now a bold orange */ color: #FFA500; margin: 0 0 20px 0; text-align: center; /* Text shadow is a deep orange */ text-shadow: 0 0 12px rgba(255, 100, 0, 0.6); } .info img { width: 100%; max-width: 700px; display: block; margin: 0 auto 25px auto; border-radius: 10px; /* Image shadow is orange */ box-shadow: 0 0 20px rgba(255, 140, 0, 0.25); border: 1px solid rgba(255, 140, 0, 0.2); outline: 1px solid rgba(255, 140, 0, 0.5); outline-offset: -1px; } a { /* Link color is now gold */ color: #FFD700; text-decoration: none; transition: color 0.3s ease; } a:hover { /* Link hover color is a light, warm peach */ color: #FFDAB9; } .button { display: inline-block; /* Button color is a rich, burnt orange */ background-color: #E55B00; color: #FFFFFF; padding: 12px 24px; border-radius: 5px; cursor: pointer; text-decoration: none; font-family: 'Cinzel', serif; font-weight: 600; transition: all 0.3s ease; border: 1px solid transparent; } .button:hover { /* Button hover is a brighter orange */ background-color: #FF8C00; box-shadow: 0 0 15px rgba(255, 140, 0, 0.5); transform: translateY(-2px); } pre { /* Code block background is a warm, dark brown */ background-color: rgba(45, 35, 25, 0.95); padding: 15px; border-radius: 5px; overflow-x: auto; /* Border is orange */ border: 1px solid rgba(255, 140, 0, 0.2); outline: 1px solid rgba(255, 140, 0, 0.5); outline-offset: -1px; } code { font-family: 'Courier New', monospace; /* Code text uses the new base text color */ color: #F5EFE6; } /* Section Container */ .section-container { margin: 40px 0; } h2 { font-size: 26px; /* Section headers are orange */ color: #FFA500; text-shadow: 0 0 10px rgba(255, 140, 0, 0.5); border-bottom: 1px solid rgba(255, 140, 0, 0.2); padding-bottom: 10px; margin-bottom: 20px; } .info-card { /* Card background is a warm dark brown */ background: rgba(45, 35, 25, 0.95); border: 1px solid rgba(255, 140, 0, 0.2); border-radius: 8px; overflow: hidden; margin-bottom: 25px; } .info-header { /* Header background has an orange tint */ background: rgba(255, 140, 0, 0.1); padding: 20px; border-bottom: 1px solid rgba(255, 140, 0, 0.2); } .info-header h3 { /* Card titles are orange */ color: #FFA500; margin: 0 0 10px 0; font-size: 
22px; text-shadow: 0 0 5px rgba(255, 140, 0, 0.3); } .model-tags { display: flex; gap: 8px; flex-wrap: wrap; } .model-tag { /* Tags are now gold-themed */ background: rgba(218, 165, 32, 0.15); color: #FFD700; padding: 4px 8px; border-radius: 4px; font-size: 12px; border: 1px solid rgba(218, 165, 32, 0.3); font-family: 'Quicksand', sans-serif; } .card-content { padding: 20px; line-height: 1.7; } .card-content p, .card-content li { margin-bottom: 1em; } .card-content p:last-child, .card-content li:last-child { margin-bottom: 0; } .card-content ul { list-style: none; padding-left: 20px; } .card-content li::before { content: '✦'; /* Bullet points are gold */ color: #FFD700; font-weight: bold; display: inline-block; width: 1em; margin-left: -1.2em; font-size: 1.2em; line-height: 1; } .card-content strong { /* Strong text is gold */ color: #FFD700; font-weight: 600; } /* Configuration */ .config-container { background: rgba(45, 35, 25, 0.95); border: 1px solid rgba(255, 140, 0, 0.2); border-radius: 8px; overflow: hidden; } .config-header { background: rgba(255, 140, 0, 0.1); padding: 15px 20px; border-bottom: 1px solid rgba(255, 140, 0, 0.2); } .config-header h3 { margin: 0; color: #FFA500; font-size: 22px; } .config-content { padding: 20px; display: grid; grid-template-columns: repeat(auto-fit, minmax(200px, 1fr)); gap: 20px; } .config-item { display: flex; flex-direction: column; gap: 5px; } .config-label { /* Config labels are gold */ color: #FFD700; font-size: 14px; font-weight: 500; font-family: 'Quicksand', sans-serif; } .config-value { /* Config values use the new base text color */ color: #F5EFE6; font-family: 'Courier New', monospace; font-size: 18px; font-weight: bold; } /* Link arrow animation */ .link-arrow { display: inline-block; transition: transform 0.3s ease; } a:hover .link-arrow { transform: translateX(3px); } .support-section { text-align: center; margin-top: 40px; background: rgba(45, 35, 25, 0.95); border: 1px solid rgba(255, 140, 0, 0.2); border-radius: 8px; padding: 20px; } .support-section p { margin-bottom: 15px; font-size: 1.1em; margin-top: 0; } /* --- ADDED STYLES FOR COLLAPSIBLE SECTIONS --- */ summary { cursor: pointer; list-style: none; /* Remove default arrow */ outline: none; display: flex; align-items: flex-start; /* Align items to the top to respect h2's vertical space */ } summary::-webkit-details-marker { display: none; /* Remove default arrow in Chrome/Safari */ } summary::before { content: '▶'; font-size: 1.2em; color: #FFA500; /* Match h2 color */ margin-right: 15px; padding-top: 5px; /* Adjust vertical alignment with h2 text */ transition: transform 0.2s ease; flex-shrink: 0; /* Prevent the arrow from shrinking */ } details[open] > summary::before { transform: rotate(90deg); } summary > h2 { flex-grow: 1; /* Makes h2 take up remaining space, so its bottom border spans the width */ } /* The existing margin-bottom on the h2 creates space between the summary and the content when open */ </style> <div class="container"> <link href="https://fonts.googleapis.com/css2?family=Cinzel:wght@400;500;600&family=Quicksand:wght@400;500&display=swap" rel="stylesheet"> <div class="header"> <h1>M3.2-36B-Animus-V8.0</h1> </div> <div class="info"> <img src="X1TGQKD5YRRA1KFRD783ZKJ5Q0.jpeg" alt="Wings_of_Fire" width="700"> <div class="support-section"> <p><strong>Send me your support to help me feed the data beast! 
Also taking commissions for universe-specific models</strong></p> <a href="https://ko-fi.com/som1tokmynam" target="_blank" class="button"> Support on Ko-fi </a> </div> <div class="section-container"> <details> <summary><h2>Quantized Models</h2></summary> <div class="info-card"> <div class="card-content"> <p>The quantized model files are available for download. Click the buttons below to view the files.</p> <a href="https://huggingface.co/Darkhn/M3.2-36B-Animus-V8.0-GGUF/tree/main" target="_blank" class="button"> Download GGUF Files <span class="link-arrow">→</span> </a> <a href="#" target="_blank" class="button"> Coming soon <span class="link-arrow">→</span> </a> <a href="#" target="_blank" class="button"> Coming soon <span class="link-arrow">→</span> </a> </div> </div> </details> </div> <div class="section-container"> <details> <summary><h2>Character Card & Lore Book</h2></summary> <div class="info-card"> <div class="card-content"> <p>For the best roleplaying experience, it is highly recommended to use the provided character card and lore book. These files help guide the model's persona and provide rich, in-universe context.</p> <a href="https://huggingface.co/Darkhn/Sampler_settings_and_system_prompt/tree/main/character_card" target="_blank" class="button"> Download Files <span class="link-arrow">→</span> </a> </div> </div> </details> </div> <div class="section-container"> <details> <summary><h2>SillyTavern Sampler Presets</h2></summary> <div class="info-card"> <div class="card-content"> <p>For a seamless setup in SillyTavern, you can download pre-configured sampler presets. These are tuned to provide an optimal balance between creativity and narrative coherence for this model.</p> <p>Simply download the <code>.json</code> file below and import it into SillyTavern's sampler presets menu.</p> <a href="https://huggingface.co/Darkhn/Sampler_settings_and_system_prompt/resolve/main/Mistral_V7_Tekken_SillyTavern_settings.json" target="_blank" class="button"> Download SillyTavern Presets <span class="link-arrow">→</span> </a> </div> </div> </details> </div> <div class="section-container"> <details> <summary><h2>Model Description</h2></summary> <div class="info-card"> <div class="card-content"> <p>This is <strong>Version 8.0</strong> of the Animus series, a fine-tune of <code>CrucibleLab-TG/M3.2-36b</code> (an upscale of Mistral Small 3.2 24B). This version introduces an experimental approach to structured output while continuing to refine the core roleplaying and DM capabilities of the model.</p> <p>The goal of this model is to provide the most lore-accurate and immersive conversational experience to date. 
It can adopt canon character personas with high fidelity, explore alternate timelines from the books, and guide the narrative with new interactive elements.</p> <p>A surprising outcome of this highly specialized training is that <strong>users have reported it is also very capable of general, non-WOF roleplay</strong>, making it a more versatile creative partner than previous versions.</p> </div> </div> </details> </div> <div class="section-container"> <details> <summary><h2>Training Details</h2></summary> <div class="info-card"> <div class="info-header"> <h3>Training Hardware</h3> </div> <div class="card-content"> <p>This model was trained on 2x <strong>H100</strong> SXM GPUs.</p> </div> </div> <div class="info-card"> <div class="info-header"> <h3>Training Procedure</h3> </div> <div class="card-content"> <p>A QLoRA (Quantized Low-Rank Adaptation) approach was used for efficient fine-tuning, with an optimized process configured using Axolotl.</p> </div> </div> <div class="info-card"> <div class="info-header"> <h3>Training Data</h3> </div> <div class="card-content"> <p>V8.0 was fine-tuned on a high-quality dataset of <strong>3,200 examples</strong> with several key improvements:</p> <ul> <li><strong>Experimental Structured Output:</strong> V8.0 was trained with a custom tokenizer and vocabulary in an attempt to teach the model to wrap its multiple-choice suggestions in <code>&lt;choices&gt;&lt;/choices&gt;</code> tags. <strong>Note: This feature is highly experimental and rarely works as intended.</strong> However, the underlying training has resulted in a very coherent and high-quality model for general roleplay.</li> <li><strong>Canon-Centric Scenarios:</strong> All roleplay scenarios are based on pivotal events from the <em>Wings of Fire</em> book series, exploring "what-if" outcomes (e.g., <em>What if Darkstalker didn't kill Arctic at that moment?</em>). This ensures deep and lore-consistent interactions.</li> <li><strong>Canon-Only Characters:</strong> The model was trained exclusively on canon characters from the books. AI-generated characters have been removed from the training data (except for the user's persona), leading to more authentic character portrayals.</li> <li><strong>Dungeon Master (DM) Enhancement:</strong> The model's ability to act as a Dungeon Master has been further enhanced, prompting the user with multiple-choice actions to drive the story forward. For example: <code>You arrive in front of Queen Scarlet. What do you do? A)... B)... C)...</code></li> <li><strong>Improved Data Cleaning:</strong> The dataset underwent a rigorous cleaning process to remove formatting artifacts from previous versions, such as <code>**scene transitions**</code>, resulting in a cleaner and more natural narrative style.</li> <li><strong>Refined Turn Structure:</strong> Addressed an issue where consecutive AI turns appeared in the dataset, leading to a healthier learning curve and more natural conversational flow.</li> </ul> </div> </div> </details> </div> <div class="section-container"> <details> <summary><h2>Intended Use & Limitations</h2></summary> <div class="info-card"> <div class="card-content"> <ul> <li><strong>Intended Use:</strong> The primary purpose of this model is creative writing and roleplaying within the <em>Wings of Fire</em> universe. 
However, user feedback indicates it is also highly effective for general-purpose roleplaying.</li> <li><strong>Limitations & Quirks:</strong> <ul> <li><strong>Experimental Features:</strong> The custom <code>&lt;choices&gt;&lt;/choices&gt;</code> tag functionality rarely works. The model may occasionally attempt to use it, but users should not expect reliable structured output in this format. Despite this, the model's overall quality remains very high.</li> <li>Performance on tasks outside of its training domain (general knowledge, coding, etc.) is not guaranteed and will likely be poor.</li> <li><strong>Versatility:</strong> While specifically tuned for <em>Wings of Fire</em>, the model has proven to be very capable of performing normal roleplay with other settings and characters.</li> <li>The model may "hallucinate" or generate plausible but non-canonical information, especially when pushed outside the established "what-if" scenarios.</li> <li><strong>Content:</strong> The training data includes mature and darker themes from the <em>Wings of Fire</em> series, such as conflict, character death, and moral ambiguity. The model is capable of generating content reflecting these themes. As always, it is up to the user what they do with it.</li> <li><strong>Formatting:</strong> Training data was cleaned to remove narrative artifacts like <code>**scene transitions**</code>. The model should now produce cleaner prose.</li> <li><strong>Safety:</strong> This model has not undergone additional safety alignment beyond what was included in its <code>M3.2-36b</code> base model. Standard responsible AI practices should be followed.</li> </ul> </li> </ul> </div> </div> </details> </div> <div class="section-container"> <details> <summary><h2>Acknowledgements</h2></summary> <div class="info-card"> <div class="card-content"> <ul> <li>Credit to Mistral AI and CrucibleLab-TG for the powerful <code>M3.2-36b</code> base model.</li> <li>Credit to Google for the Gemini Pro model, used in dataset generation.</li> <li>Credit to Evan Armstrong for <a href="https://github.com/e-p-armstrong/augmentoolkit" target="_blank">Augmentoolkit</a>, an invaluable tool for dataset creation.</li> </ul> </div> </div> </details> </div> </div> </div>
cpatonn/gpt-oss-120b-BF16
cpatonn
2025-08-10T20:03:41Z
0
0
transformers
[ "transformers", "safetensors", "gpt_oss", "text-generation", "vllm", "conversational", "base_model:openai/gpt-oss-120b", "base_model:finetune:openai/gpt-oss-120b", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-10T18:41:06Z
--- license: apache-2.0 pipeline_tag: text-generation library_name: transformers tags: - vllm base_model: - openai/gpt-oss-120b --- # gpt-oss-120b-BF16 ## Method Converted using the following script: ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM, AutoConfig, Mxfp4Config model_id = "openai/gpt-oss-120b" output_dir = "./gpt-oss-120b-BF16" quantization_config = Mxfp4Config(dequantize=True) model_kwargs = dict( torch_dtype=torch.bfloat16, quantization_config=quantization_config, device_map="auto", ) model = AutoModelForCausalLM.from_pretrained(model_id, **model_kwargs) tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True) model.save_pretrained(output_dir, save_safetensors=True, save_compressed=False) tokenizer.save_pretrained(output_dir) ``` ## Inference ### Prerequisites Install the latest vLLM version: ``` pip install -U vllm \ --pre \ --extra-index-url https://wheels.vllm.ai/nightly ``` ### vllm For Ampere devices, please use the TRITON_ATTN_VLLM_V1 attention backend, e.g., ``` VLLM_ATTENTION_BACKEND=TRITON_ATTN_VLLM_V1 vllm serve cpatonn/gpt-oss-120b-BF16 --async-scheduling ``` For further information, please visit this [guide](https://docs.vllm.ai/projects/recipes/en/latest/OpenAI/GPT-OSS.html). # gpt-oss-120b <p align="center"> <img alt="gpt-oss-120b" src="https://raw.githubusercontent.com/openai/gpt-oss/main/docs/gpt-oss-120b.svg"> </p> <p align="center"> <a href="https://gpt-oss.com"><strong>Try gpt-oss</strong></a> · <a href="https://cookbook.openai.com/topic/gpt-oss"><strong>Guides</strong></a> · <a href="https://openai.com/index/gpt-oss-model-card"><strong>Model card</strong></a> · <a href="https://openai.com/index/introducing-gpt-oss/"><strong>OpenAI blog</strong></a> </p> <br> Welcome to the gpt-oss series, [OpenAI’s open-weight models](https://openai.com/open-models) designed for powerful reasoning, agentic tasks, and versatile developer use cases. We’re releasing two flavors of these open models: - `gpt-oss-120b` — for production, general purpose, high reasoning use cases that fit into a single 80GB GPU (like NVIDIA H100 or AMD MI300X) (117B parameters with 5.1B active parameters) - `gpt-oss-20b` — for lower latency, and local or specialized use cases (21B parameters with 3.6B active parameters) Both models were trained on our [harmony response format](https://github.com/openai/harmony) and should only be used with the harmony format as it will not work correctly otherwise. > [!NOTE] > This model card is dedicated to the larger `gpt-oss-120b` model. Check out [`gpt-oss-20b`](https://huggingface.co/openai/gpt-oss-20b) for the smaller model. # Highlights * **Permissive Apache 2.0 license:** Build freely without copyleft restrictions or patent risk—ideal for experimentation, customization, and commercial deployment. * **Configurable reasoning effort:** Easily adjust the reasoning effort (low, medium, high) based on your specific use case and latency needs. * **Full chain-of-thought:** Gain complete access to the model’s reasoning process, facilitating easier debugging and increased trust in outputs. It’s not intended to be shown to end users. * **Fine-tunable:** Fully customize models to your specific use case through parameter fine-tuning. * **Agentic capabilities:** Use the models’ native capabilities for function calling, [web browsing](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#browser), [Python code execution](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#python), and Structured Outputs. 
* **Native MXFP4 quantization:** The models are trained with native MXFP4 precision for the MoE layer, making `gpt-oss-120b` run on a single 80GB GPU (like NVIDIA H100 or AMD MI300X) and the `gpt-oss-20b` model run within 16GB of memory. --- # Inference examples ## Transformers You can use `gpt-oss-120b` and `gpt-oss-20b` with Transformers. If you use the Transformers chat template, it will automatically apply the [harmony response format](https://github.com/openai/harmony). If you use `model.generate` directly, you need to apply the harmony format manually using the chat template or use our [openai-harmony](https://github.com/openai/harmony) package. To get started, install the necessary dependencies to set up your environment: ``` pip install -U transformers kernels torch ``` Once set up, you can run the model with the snippet below: ```py from transformers import pipeline import torch model_id = "openai/gpt-oss-120b" pipe = pipeline( "text-generation", model=model_id, torch_dtype="auto", device_map="auto", ) messages = [ {"role": "user", "content": "Explain quantum mechanics clearly and concisely."}, ] outputs = pipe( messages, max_new_tokens=256, ) print(outputs[0]["generated_text"][-1]) ``` Alternatively, you can run the model via [`Transformers Serve`](https://huggingface.co/docs/transformers/main/serving) to spin up an OpenAI-compatible webserver: ``` transformers serve transformers chat localhost:8000 --model-name-or-path openai/gpt-oss-120b ``` [Learn more about how to use gpt-oss with Transformers.](https://cookbook.openai.com/articles/gpt-oss/run-transformers) ## vLLM vLLM recommends using [uv](https://docs.astral.sh/uv/) for Python dependency management. You can use vLLM to spin up an OpenAI-compatible webserver. The following command will automatically download the model and start the server. ```bash uv pip install --pre vllm==0.10.1+gptoss \ --extra-index-url https://wheels.vllm.ai/gpt-oss/ \ --extra-index-url https://download.pytorch.org/whl/nightly/cu128 \ --index-strategy unsafe-best-match vllm serve openai/gpt-oss-120b ``` [Learn more about how to use gpt-oss with vLLM.](https://cookbook.openai.com/articles/gpt-oss/run-vllm) ## PyTorch / Triton To learn about how to use this model with PyTorch and Triton, check out our [reference implementations in the gpt-oss repository](https://github.com/openai/gpt-oss?tab=readme-ov-file#reference-pytorch-implementation). ## Ollama If you are trying to run gpt-oss on consumer hardware, you can use Ollama by running the following commands after [installing Ollama](https://ollama.com/download). ```bash # gpt-oss-120b ollama pull gpt-oss:120b ollama run gpt-oss:120b ``` [Learn more about how to use gpt-oss with Ollama.](https://cookbook.openai.com/articles/gpt-oss/run-locally-ollama) ## LM Studio If you are using [LM Studio](https://lmstudio.ai/), you can use the following command to download it. ```bash # gpt-oss-120b lms get openai/gpt-oss-120b ``` Check out our [awesome list](https://github.com/openai/gpt-oss/blob/main/awesome-gpt-oss.md) for a broader collection of gpt-oss resources and inference partners. 
--- # Download the model You can download the model weights from the [Hugging Face Hub](https://huggingface.co/collections/openai/gpt-oss-68911959590a1634ba11c7a4) directly from Hugging Face CLI: ```shell # gpt-oss-120b huggingface-cli download openai/gpt-oss-120b --include "original/*" --local-dir gpt-oss-120b/ pip install gpt-oss python -m gpt_oss.chat model/ ``` # Reasoning levels You can adjust the reasoning level that suits your task across three levels: * **Low:** Fast responses for general dialogue. * **Medium:** Balanced speed and detail. * **High:** Deep and detailed analysis. The reasoning level can be set in the system prompts, e.g., "Reasoning: high". # Tool use The gpt-oss models are excellent for: * Web browsing (using built-in browsing tools) * Function calling with defined schemas * Agentic operations like browser tasks # Fine-tuning Both gpt-oss models can be fine-tuned for a variety of specialized use cases. This larger model `gpt-oss-120b` can be fine-tuned on a single H100 node, whereas the smaller [`gpt-oss-20b`](https://huggingface.co/openai/gpt-oss-20b) can even be fine-tuned on consumer hardware.
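Tying the reasoning-level note back to the earlier Transformers example, a minimal sketch (assuming the chat template forwards the system prompt unchanged, per the card's "Reasoning: high" convention):

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="openai/gpt-oss-120b", torch_dtype="auto", device_map="auto")
messages = [
    # Per the card: the reasoning level is set in the system prompt.
    {"role": "system", "content": "Reasoning: high"},
    {"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]
print(pipe(messages, max_new_tokens=512)[0]["generated_text"][-1])
```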
giovannidemuri/llama8b-er-afg-v82-seed2-hx
giovannidemuri
2025-08-10T19:57:28Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "conversational", "base_model:meta-llama/Llama-3.1-8B", "base_model:finetune:meta-llama/Llama-3.1-8B", "license:llama3.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-10T17:54:57Z
--- library_name: transformers license: llama3.1 base_model: meta-llama/Llama-3.1-8B tags: - generated_from_trainer model-index: - name: llama8b-er-afg-v82-seed2-hx results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama8b-er-afg-v82-seed2-hx This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 2 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.7.0+cu128 - Datasets 4.0.0 - Tokenizers 0.21.0
AlignmentResearch/pineapple-llama-3.1-8b-instruct-annah_sft
AlignmentResearch
2025-08-10T19:50:05Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:adapter:meta-llama/Llama-3.1-8B-Instruct", "region:us" ]
null
2025-08-10T19:40:48Z
--- base_model: meta-llama/Meta-Llama-3.1-8B-Instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
annahbanannah/annah_sft-000
annahbanannah
2025-08-10T19:49:51Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:finetune:meta-llama/Llama-3.1-8B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-08-10T19:39:22Z
--- base_model: meta-llama/Meta-Llama-3.1-8B-Instruct library_name: transformers model_name: annah_sft-000 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for annah_sft-000 This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="annahbanannah/annah_sft-000", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/farai/grpo_bench/runs/owqcn8mk) This model was trained with SFT. ### Framework versions - TRL: 0.20.0 - Transformers: 4.54.1 - Pytorch: 2.7.1+cu126 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
South-Node-Corp/blockassist-bc-frisky_jumping_dingo_1754851140
South-Node-Corp
2025-08-10T19:39:39Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "frisky jumping dingo", "arxiv:2504.07091", "region:us" ]
null
2025-08-10T19:38:52Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - frisky jumping dingo --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Sayemahsjn/blockassist-bc-playful_feline_octopus_1754853587
Sayemahsjn
2025-08-10T19:39:30Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us" ]
null
2025-08-10T19:38:47Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - playful feline octopus --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ibm-research/biomed.rna.bert.110m.wced.v1
ibm-research
2025-08-10T19:36:43Z
0
3
biomed-multi-omic
[ "biomed-multi-omic", "Biology", "RNA", "dataset:PanglaoDB", "dataset:CELLxGENE", "arxiv:2506.14861", "license:apache-2.0", "region:us" ]
null
2025-06-24T12:17:13Z
--- library_name: biomed-multi-omic license: apache-2.0 tags: - Biology - RNA datasets: - PanglaoDB - CELLxGENE --- # ibm-research/biomed.rna.bert.110m.wced.v1 Biomedical foundation models for omics data. This package supports the development of foundation models for scRNA or for DNA data. `biomed-multi-omic` enables development and testing of foundation models for DNA sequences and for RNA expression, with modular model and training methods for pretraining and fine-tuning, controllable via a declarative no-code interface. `biomed-multi-omic` leverages anndata, HuggingFace Transformers, PyTorch Lightning and Hydra. - 🧬 A single package for DNA and RNA foundation models. scRNA pretraining on h5ad files or TileDB (e.g. CellXGene), DNA pretraining on the reference human genome (GRCh38/hg38) and on a variant-imputed genome based on common SNPs from the GWAS Catalog and ClinVar datasets. - 🚀 Leverages the latest open source tools: anndata, HuggingFace Transformers and PyTorch Lightning - 📈 Zero-shot and fine-tuning support for diverse downstream tasks (cell type annotation and perturbation prediction for scRNA; promoter prediction and regulatory-region prediction using massively parallel reporter assays (MPRAs) for DNA sequences) - Novel pretraining strategies for scRNA and DNA implemented alongside existing methods to enable experimentation and comparison. For details on how the models were trained, please refer to [the BMFM-RNA preprint](https://arxiv.org/abs/2506.14861). - **Developers:** IBM Research - **GitHub Repository:** [https://github.com/BiomedSciAI/biomed-multi-omic](https://github.com/BiomedSciAI/biomed-multi-omic) - **Paper:** [BMFM-RNA: An Open Framework for Building and Evaluating Transcriptomic Foundation Models](https://arxiv.org/abs/2506.14861) - **Release Date**: Jun 17th, 2025 - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) ## Checkpoint Whole-cell Expression Decoder (WCED): Using the BMFM-RNA framework, we implemented a new pretraining objective centered around predicting the expression levels for the whole cell at once, rather than limiting prediction to just the masked genes. **WCED 10 pct:** Trained using WCED with random gene order and log-normalization. See section 2.3.4 of [the BMFM-RNA manuscript](https://arxiv.org/abs/2506.14861) for more details. ## Usage Using `biomed.rna.bert.110m.wced.v1` requires the codebase at [https://github.com/BiomedSciAI/biomed-multi-omic](https://github.com/BiomedSciAI/biomed-multi-omic). For installation, please follow the [instructions on GitHub](https://github.com/BiomedSciAI/biomed-multi-omic?tab=readme-ov-file#installation). ## RNA Inference To get embeddings and predictions for scRNA data, run: ```bash export MY_DATA_FILE=... # path to h5ad file with raw counts and gene symbols bmfm-targets-run -cn predict input_file=$MY_DATA_FILE working_dir=/tmp checkpoint=ibm-research/biomed.rna.bert.110m.wced.v1 ``` For more details see the [RNA tutorials on GitHub](https://github.com/BiomedSciAI/biomed-multi-omic/tree/main/tutorials/RNA). ## Citation ```bibtex @misc{dandala2025bmfmrnaopenframeworkbuilding, title={BMFM-RNA: An Open Framework for Building and Evaluating Transcriptomic Foundation Models}, author={Bharath Dandala and Michael M. 
Danziger and Ella Barkan and Tanwi Biswas and Viatcheslav Gurev and Jianying Hu and Matthew Madgwick and Akira Koseki and Tal Kozlovski and Michal Rosen-Zvi and Yishai Shimoni and Ching-Huei Tsou}, year={2025}, eprint={2506.14861}, archivePrefix={arXiv}, primaryClass={q-bio.GN}, url={https://arxiv.org/abs/2506.14861}, } ```
ecamli/blockassist-bc-hulking_soft_hippo_1754854302
ecamli
2025-08-10T19:32:30Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "hulking soft hippo", "arxiv:2504.07091", "region:us" ]
null
2025-08-10T19:32:10Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - hulking soft hippo --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ecamli/blockassist-bc-hulking_soft_hippo_1754853975
ecamli
2025-08-10T19:26:54Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "hulking soft hippo", "arxiv:2504.07091", "region:us" ]
null
2025-08-10T19:26:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - hulking soft hippo --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
moree44/blockassist-bc-sturdy_silent_pigeon_1754852937
moree44
2025-08-10T19:18:51Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sturdy silent pigeon", "arxiv:2504.07091", "region:us" ]
null
2025-08-10T19:17:55Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sturdy silent pigeon --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Shopnil09/blockassist-bc-scruffy_knobby_hippo_1754853321
Shopnil09
2025-08-10T19:16:10Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "scruffy knobby hippo", "arxiv:2504.07091", "region:us" ]
null
2025-08-10T19:15:47Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - scruffy knobby hippo --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ethduke/blockassist-bc-feathered_shaggy_swan_1754852881
ethduke
2025-08-10T19:08:47Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "feathered shaggy swan", "arxiv:2504.07091", "region:us" ]
null
2025-08-10T19:08:40Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - feathered shaggy swan --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
01Nur/results
01Nur
2025-08-10T19:07:28Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:unsloth/llama-3.2-1b-instruct-bnb-4bit", "lora", "transformers", "unsloth", "text-generation", "conversational", "license:llama3.2", "region:us" ]
text-generation
2025-08-10T19:07:13Z
--- library_name: peft license: llama3.2 base_model: unsloth/llama-3.2-1b-instruct-bnb-4bit tags: - base_model:adapter:unsloth/llama-3.2-1b-instruct-bnb-4bit - lora - transformers - unsloth pipeline_tag: text-generation model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [unsloth/llama-3.2-1b-instruct-bnb-4bit](https://huggingface.co/unsloth/llama-3.2-1b-instruct-bnb-4bit) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.16.0 - Transformers 4.55.0 - Pytorch 2.5.1+cu121 - Datasets 3.6.0 - Tokenizers 0.21.4
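The auto-generated card above leaves "How to Get Started" blank; the following is a minimal loading sketch rather than the author's documented workflow. The adapter id comes from this record and the base model from the front matter; the prompt, device placement, and generation settings are illustrative assumptions.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/llama-3.2-1b-instruct-bnb-4bit"  # base model named in the front matter
adapter_id = "01Nur/results"                        # this LoRA adapter repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
# Loading the 4-bit base requires bitsandbytes; device_map needs accelerate.
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA weights

inputs = tokenizer("Hello! How are you?", return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```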
KaraKaraWitch/GoldDiamondGold-L33-70b
KaraKaraWitch
2025-08-10T19:07:12Z
27
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2408.07990", "base_model:Black-Ink-Guild/Pernicious_Prophecy_70B", "base_model:merge:Black-Ink-Guild/Pernicious_Prophecy_70B", "base_model:Blackroot/Mirai-3.0-70B", "base_model:merge:Blackroot/Mirai-3.0-70B", "base_model:Doctor-Shotgun/L3.3-70B-Magnum-Diamond", "base_model:merge:Doctor-Shotgun/L3.3-70B-Magnum-Diamond", "base_model:EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.0", "base_model:merge:EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.0", "base_model:LatitudeGames/Wayfarer-Large-70B-Llama-3.3", "base_model:merge:LatitudeGames/Wayfarer-Large-70B-Llama-3.3", "base_model:Ppoyaa/MythoNemo-L3.1-70B-v1.0", "base_model:merge:Ppoyaa/MythoNemo-L3.1-70B-v1.0", "base_model:ReadyArt/Forgotten-Safeword-70B-v5.0", "base_model:merge:ReadyArt/Forgotten-Safeword-70B-v5.0", "base_model:Sao10K/70B-L3.3-mhnnn-x1", "base_model:merge:Sao10K/70B-L3.3-mhnnn-x1", "base_model:TheDrummer/Anubis-70B-v1.1", "base_model:merge:TheDrummer/Anubis-70B-v1.1", "base_model:TheDrummer/Fallen-Llama-3.3-70B-v1", "base_model:merge:TheDrummer/Fallen-Llama-3.3-70B-v1", "base_model:deepcogito/cogito-v2-preview-llama-70B", "base_model:merge:deepcogito/cogito-v2-preview-llama-70B", "base_model:flammenai/Mahou-1.5-llama3.1-70B", "base_model:merge:flammenai/Mahou-1.5-llama3.1-70B", "base_model:marcelbinz/Llama-3.1-Centaur-70B", "base_model:merge:marcelbinz/Llama-3.1-Centaur-70B", "base_model:nbeerbower/Llama3.1-Gutenberg-Doppel-70B", "base_model:merge:nbeerbower/Llama3.1-Gutenberg-Doppel-70B", "base_model:nvidia/Llama-3.1-Nemotron-70B-Instruct-HF", "base_model:merge:nvidia/Llama-3.1-Nemotron-70B-Instruct-HF", "base_model:tdrussell/Llama-3-70B-Instruct-Storywriter", "base_model:merge:tdrussell/Llama-3-70B-Instruct-Storywriter", "base_model:watt-ai/watt-tool-70B", "base_model:merge:watt-ai/watt-tool-70B", "base_model:zerofata/L3.3-GeneticLemonade-Unleashed-v3-70B", "base_model:merge:zerofata/L3.3-GeneticLemonade-Unleashed-v3-70B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-07T08:44:46Z
--- base_model: - LatitudeGames/Wayfarer-Large-70B-Llama-3.3 - marcelbinz/Llama-3.1-Centaur-70B - flammenai/Mahou-1.5-llama3.1-70B - EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.0 - zerofata/L3.3-GeneticLemonade-Unleashed-v3-70B - deepcogito/cogito-v2-preview-llama-70B - tdrussell/Llama-3-70B-Instruct-Storywriter - Ppoyaa/MythoNemo-L3.1-70B-v1.0 - Blackroot/Mirai-3.0-70B - Sao10K/70B-L3.3-mhnnn-x1 - TheDrummer/Fallen-Llama-3.3-70B-v1 - nvidia/Llama-3.1-Nemotron-70B-Instruct-HF - TheDrummer/Anubis-70B-v1.1 - Doctor-Shotgun/L3.3-70B-Magnum-Diamond - Black-Ink-Guild/Pernicious_Prophecy_70B - watt-ai/watt-tool-70B - ReadyArt/Forgotten-Safeword-70B-v5.0 - nbeerbower/Llama3.1-Gutenberg-Doppel-70B library_name: transformers tags: - mergekit - merge --- # GoldDiamondGold-70b ![image/png](https://cdn-uploads.huggingface.co/production/uploads/633e85093a17ab61de8d9073/8BCqqt8KoN6DUrjHJXpi0.png) This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Motivation Inspired by the sapphire model by BruhzWater, the base for this is cogito. The rest of the models are what I think would be good for an SCE attempt. Basically combining all the best bits of each model and seeing what I get out of this. There are three interesting models that I highlighted in the previous merge that went into this too. ## Vibes ~~Seems OK. I think it's better than the previous model. That felt super sloppy.~~ - It's pretty smart & seems to get the prompt/vibe format on the first try. - The model seems to have a horny streak to it, but I can't put my finger on it. - Compared to `KaraKaraWitch/Llama-EveningMirai-3.3-70B`, this model is smarter. - *Can* fall into the llama sloppiness if your previous responses are llama. ~~But I think that's a skill issue.~~ - I think it's a bit *too* "assistant" heavy. Not 100% sure what to remove to fix it? - Seems to have a few too many claudisms(?) - *A stickler for system prompts.* Do ensure your system prompt is well written and doesn't conflict with what you want. - ...Hard to write this model up; I actually like it in general. ## Merge Details ### Merge Method This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using [deepcogito/cogito-v2-preview-llama-70B](https://huggingface.co/deepcogito/cogito-v2-preview-llama-70B) as a base.
### Models Merged The following models were included in the merge: * [LatitudeGames/Wayfarer-Large-70B-Llama-3.3](https://huggingface.co/LatitudeGames/Wayfarer-Large-70B-Llama-3.3) * [marcelbinz/Llama-3.1-Centaur-70B](https://huggingface.co/marcelbinz/Llama-3.1-Centaur-70B) * [flammenai/Mahou-1.5-llama3.1-70B](https://huggingface.co/flammenai/Mahou-1.5-llama3.1-70B) * [EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.0](https://huggingface.co/EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.0) * [zerofata/L3.3-GeneticLemonade-Unleashed-v3-70B](https://huggingface.co/zerofata/L3.3-GeneticLemonade-Unleashed-v3-70B) * [tdrussell/Llama-3-70B-Instruct-Storywriter](https://huggingface.co/tdrussell/Llama-3-70B-Instruct-Storywriter) * [Ppoyaa/MythoNemo-L3.1-70B-v1.0](https://huggingface.co/Ppoyaa/MythoNemo-L3.1-70B-v1.0) * [Blackroot/Mirai-3.0-70B](https://huggingface.co/Blackroot/Mirai-3.0-70B) * [Sao10K/70B-L3.3-mhnnn-x1](https://huggingface.co/Sao10K/70B-L3.3-mhnnn-x1) * [TheDrummer/Fallen-Llama-3.3-70B-v1](https://huggingface.co/TheDrummer/Fallen-Llama-3.3-70B-v1) * [nvidia/Llama-3.1-Nemotron-70B-Instruct-HF](https://huggingface.co/nvidia/Llama-3.1-Nemotron-70B-Instruct-HF) * [TheDrummer/Anubis-70B-v1.1](https://huggingface.co/TheDrummer/Anubis-70B-v1.1) * [Doctor-Shotgun/L3.3-70B-Magnum-Diamond](https://huggingface.co/Doctor-Shotgun/L3.3-70B-Magnum-Diamond) * [Black-Ink-Guild/Pernicious_Prophecy_70B](https://huggingface.co/Black-Ink-Guild/Pernicious_Prophecy_70B) * [watt-ai/watt-tool-70B](https://huggingface.co/watt-ai/watt-tool-70B) * [ReadyArt/Forgotten-Safeword-70B-v5.0](https://huggingface.co/ReadyArt/Forgotten-Safeword-70B-v5.0) * [nbeerbower/Llama3.1-Gutenberg-Doppel-70B](https://huggingface.co/nbeerbower/Llama3.1-Gutenberg-Doppel-70B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: # Mirai is Mirai. - model: Blackroot/Mirai-3.0-70B # Narration - model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.0 # Claude 3 Sonnet/Opus prose style and quality - model: Doctor-Shotgun/L3.3-70B-Magnum-Diamond # "For the *Action*"? - model: marcelbinz/Llama-3.1-Centaur-70B # Better writing style, "creativity" shift. (fiction books) - model: tdrussell/Llama-3-70B-Instruct-Storywriter # Roleplaying and Story Writing - model: Ppoyaa/MythoNemo-L3.1-70B-v1.0 # Sao10K - model: Sao10K/70B-L3.3-mhnnn-x1 # Medical - model: Black-Ink-Guild/Pernicious_Prophecy_70B # Dialogue reinforcement - model: LatitudeGames/Wayfarer-Large-70B-Llama-3.3 # Extra details - model: TheDrummer/Anubis-70B-v1.1 # "Meanness" - model: TheDrummer/Fallen-Llama-3.3-70B-v1 # Antique history - model: nbeerbower/Llama3.1-Gutenberg-Doppel-70B # Normalization? - model: nvidia/Llama-3.1-Nemotron-70B-Instruct-HF # Doomer tilt (From negative llama) + "Taboo"(?) - model: ReadyArt/Forgotten-Safeword-70B-v5.0 # ERP/RP enhancement + Anime tilt - model: zerofata/L3.3-GeneticLemonade-Unleashed-v3-70B # Tool Calling - model: watt-ai/watt-tool-70B # Short, casual dialogue (Anime tilt) - model: flammenai/Mahou-1.5-llama3.1-70B merge_method: sce base_model: deepcogito/cogito-v2-preview-llama-70B select_topk: 0.33 parameters: normalize: true dtype: bfloat16 ```
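The merged checkpoint is a standard Llama-architecture `transformers` model, so a minimal loading sketch can be added here. This is not part of the original card: the bfloat16 dtype mirrors the `dtype: bfloat16` line in the config above, while the prompt and sampling settings are assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "KaraKaraWitch/GoldDiamondGold-L33-70b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# A 70B model in bf16 needs roughly 140 GB of weights, so shard across GPUs.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.8)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```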
Amanvir/gpt-2-onnx-test
Amanvir
2025-08-10T19:01:31Z
0
0
null
[ "onnx", "license:apache-2.0", "region:us" ]
null
2025-04-14T02:15:17Z
--- license: apache-2.0 ---
Mioku/blockassist-bc-enormous_fierce_stingray_1754852164
Mioku
2025-08-10T18:58:30Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "enormous fierce stingray", "arxiv:2504.07091", "region:us" ]
null
2025-08-10T18:57:00Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - enormous fierce stingray --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
annasoli/Qwen2.5-14B_SV_l24_lr1e-4_a256_fem_career_1E-128
annasoli
2025-08-10T18:56:16Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-10T18:56:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Soughing/mqa_xl
Soughing
2025-08-10T18:51:55Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-08-01T17:48:49Z
--- license: apache-2.0 ---
Sayemahsjn/blockassist-bc-playful_feline_octopus_1754850402
Sayemahsjn
2025-08-10T18:43:51Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us" ]
null
2025-08-10T18:43:44Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - playful feline octopus --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mBITANU/gita-sastragpt-v1-merged
mBITANU
2025-08-10T18:41:15Z
0
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-08-10T16:22:18Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
rambetiko/blockassist-bc-soft_lanky_marmot_1754850157
rambetiko
2025-08-10T18:29:05Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "soft lanky marmot", "arxiv:2504.07091", "region:us" ]
null
2025-08-10T18:28:46Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - soft lanky marmot --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754850397
IvanJAjebu
2025-08-10T18:27:44Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-10T18:27:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
MAwaisM/results
MAwaisM
2025-08-10T18:09:23Z
0
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-10T17:50:50Z
--- library_name: transformers license: mit base_model: xlm-roberta-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: results results: [] --- # Indonesian E-commerce Review Sentiment Analysis This model is a fine-tuned version of `xlm-roberta-base` for the task of sentiment analysis on Indonesian e-commerce product reviews. ## Model Description The model was trained on the `dipawidia/ecommerce-product-reviews-sentiment` dataset, which consists of product reviews. The model classifies reviews into two categories: **POSITIVE** and **NEGATIVE**. ## Intended uses & limitations This model is intended for sentiment analysis of product reviews in the Indonesian language. It is a good starting point for a Business Analyst to understand customer feedback at scale. The primary limitation is that it was trained for only **one epoch**, so while its performance is high, it may not be as robust as a model trained for multiple epochs. ## Training and evaluation data The model was fine-tuned using the `dipawidia/ecommerce-product-reviews-sentiment` dataset. The dataset's `review` column was used as the input, and the `sentimen` column was used as the label. The `sentimen` column was mapped to `0` for negative reviews and `1` for positive reviews. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3998 | 1.0 | 1306 | 0.2992 | 0.9173 | ### How to Use You can use this model directly with the Hugging Face `pipeline` function. ```python from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline # Define the label mapping label_map = {0: "NEGATIVE", 1: "POSITIVE"} # Load the model directly from your profile model = AutoModelForSequenceClassification.from_pretrained( "MAwaisM/results", num_labels=2, id2label=label_map ) # Load the tokenizer tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base") # Create the pipeline classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer) # Test with a positive Indonesian review text = "Pengiriman sangat cepat, saya sangat senang dengan produknya." print(classifier(text)) # Test with a negative Indonesian review text = "Pelayanan pelanggan sangat buruk, saya tidak akan membeli lagi." print(classifier(text)) ```
otsu11/blockassist-bc-screeching_squeaky_cod_1754849228
otsu11
2025-08-10T18:07:56Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "screeching squeaky cod", "arxiv:2504.07091", "region:us" ]
null
2025-08-10T18:07:41Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - screeching squeaky cod --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Shopnil09/blockassist-bc-scruffy_knobby_hippo_1754849230
Shopnil09
2025-08-10T18:07:51Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "scruffy knobby hippo", "arxiv:2504.07091", "region:us" ]
null
2025-08-10T18:07:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - scruffy knobby hippo --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
otsu11/blockassist-bc-screeching_squeaky_cod_1754848897
otsu11
2025-08-10T18:02:19Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "screeching squeaky cod", "arxiv:2504.07091", "region:us" ]
null
2025-08-10T18:02:10Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - screeching squeaky cod --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ClassiCC-Corpus/Curio-1.1b-intermediate-checkpoint-100B
ClassiCC-Corpus
2025-08-10T17:58:46Z
3
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-09T17:54:12Z
--- library_name: transformers tags: [] --- # 🐦 Curió 1.1B (intermediate checkpoint) ## 📖 Checkpoint details This is an intermediate checkpoint of Curió 1.1B. This checkpoint started from [TinyLlama 1T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-480k-1T) and was trained for 100B tokens from ClassiCC-PT. The final Curió 1.1B model is available [here](https://huggingface.co/ClassiCC-Corpus/Curio-1.1b). The ClassiCC corpus is available [here](https://huggingface.co/datasets/ClassiCC-Corpus/ClassiCC-PT). ## 📖 Overview Curió 1.1B is a Portuguese-adapted language model created via continued pretraining of TinyLlama 1.1B (1T), originally trained on 1 trillion English tokens, on 150B Portuguese tokens from the ClassiCC-PT corpus. This model was designed to explore the impact of language-specific corpora on adapting an English-trained base model to Portuguese, yielding performance improvements on Portuguese benchmarks without large-scale retraining from scratch. ## 🏗 Training Setup - Base model: TinyLlama 1.1B (LLaMA-2 architecture) - Parameters: 1.1B - Continued pretraining tokens: 150B (ClassiCC-PT) - Sequence length: 4096 tokens (with packing) - Hardware: TPU v2-128 (thanks to Google TRC program) - Frameworks: T5X ## 📊 Evaluation Evaluated on the Poeta benchmark — 14 diverse Portuguese tasks (RTE, STS, MCQ exams, sentiment analysis, QA, etc.) — using the Normalized Preferred Metric (NPM). | Model | Training Regimen | Poeta v2 NPM | | ----------------- | -------------------------------------------- | ------------ | | TinyLlama 1T (EN) | – | 17.4 | | TinyLlama 2T (EN) | +1T EN continued pretraining | 20.9 | | training with mC4-PT | +150B PT (mC4-PT) continued pretraining | \~20 | | training with ClueWeb-22-PT | +150B PT (Clueweb-22-PT) continued pretraining | \~27 | | **Curió 1.1B** | +150B PT (ClassiCC-PT) continued pretraining | **27.1** | ## 📥 Usage Please note that **Curió 1.1B has not been trained to be used as a chat model.** ```python from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "ClassiCC-Corpus/Curio-1.1B" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) ``` ## 📜 Citation If you use Curió 1.1B, please cite: ``` Coming soon ```
ClassiCC-Corpus/Curio-1.1b
ClassiCC-Corpus
2025-08-10T17:53:37Z
3
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-09T17:50:04Z
--- library_name: transformers tags: [] --- # 🐦 Curió 1.1B ## 📖 Overview Curió 1.1B is a Portuguese-adapted language model created via continued pretraining of TinyLlama 1.1B (1T), originally trained on 1 trillion English tokens, on 150B Portuguese tokens from the ClassiCC-PT corpus. This model was designed to explore the impact of language-specific corpora on adapting an English-trained base model to Portuguese, yielding performance improvements on Portuguese benchmarks without large-scale retraining from scratch. ## 🏗 Training Setup - Base model: TinyLlama 1.1B (LLaMA-2 architecture) - Parameters: 1.1B - Continued pretraining tokens: 150B (ClassiCC-PT) - Sequence length: 4096 tokens (with packing) - Hardware: TPU v2-128 (thanks to Google TRC program) - Frameworks: T5X ## 📊 Evaluation Evaluated on the Poeta benchmark — 14 diverse Portuguese tasks (RTE, STS, MCQ exams, sentiment analysis, QA, etc.) — using the Normalized Preferred Metric (NPM). | Model | Training Regimen | Poeta v2 NPM | | ----------------- | -------------------------------------------- | ------------ | | TinyLlama 1T (EN) | – | 17.4 | | TinyLlama 2T (EN) | +1T EN continued pretraining | 20.9 | | training with mC4-PT | +150B PT (mC4-PT) continued pretraining | \~20 | | training with ClueWeb-22-PT | +150B PT (Clueweb-22-PT) continued pretraining | \~27 | | **Curió 1.1B** | +150B PT (ClassiCC-PT) continued pretraining | **27.1** | ## 📥 Usage Please note that **Curió 1.1B has not been trained to be used as a chat model.** ```python from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "ClassiCC-Corpus/Curio-1.1B" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) ``` ## 📜 Citation If you use Curió 1.1B, please cite: ``` Coming soon ```
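Extending the usage snippet above, here is a minimal completion sketch. The Portuguese prompt and generation settings are illustrative assumptions rather than part of the original card; since Curió is a base model rather than a chat model, plain text completion is the appropriate mode.

```python
import torch

# Continues from the tokenizer/model loaded in the usage snippet above.
prompt = "O Brasil é um país"  # hypothetical prompt, for illustration only
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=40, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```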
Legendarniyivan1/blockassist-bc-voracious_wary_antelope_1754846646
Legendarniyivan1
2025-08-10T17:50:51Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "voracious wary antelope", "arxiv:2504.07091", "region:us" ]
null
2025-08-10T17:50:37Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - voracious wary antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kristysimon87/Update.New.full.videos.alana.Viral.Video.Official.Tutorial
kristysimon87
2025-08-10T17:45:01Z
0
0
null
[ "region:us" ]
null
2025-08-10T17:44:46Z
<a href="https://shorturl.at/1rUfR" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
ISTA-DASLab/Qwen3-1.7B-FPQuant-QAT-NVFP4-600steps
ISTA-DASLab
2025-08-10T17:43:15Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "fp_quant", "region:us" ]
text-generation
2025-08-10T17:42:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Shopnil09/blockassist-bc-scruffy_knobby_hippo_1754847735
Shopnil09
2025-08-10T17:43:07Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "scruffy knobby hippo", "arxiv:2504.07091", "region:us" ]
null
2025-08-10T17:42:49Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - scruffy knobby hippo --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kristysimon87/Update.New.full.videos.policia.mexicana.Viral.Video.Official.Tutorial
kristysimon87
2025-08-10T17:40:43Z
0
0
null
[ "region:us" ]
null
2025-08-10T17:40:25Z
<a href="https://shorturl.at/1rUfR" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
ecamli/blockassist-bc-hulking_soft_hippo_1754847414
ecamli
2025-08-10T17:38:00Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "hulking soft hippo", "arxiv:2504.07091", "region:us" ]
null
2025-08-10T17:37:41Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - hulking soft hippo --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
DimaSK1/Qwen2-1.5B-bnb-4bit_ema_4
DimaSK1
2025-08-10T17:34:12Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "unsloth", "trl", "base_model:unsloth/Qwen2-1.5B-bnb-4bit", "base_model:finetune:unsloth/Qwen2-1.5B-bnb-4bit", "endpoints_compatible", "region:us" ]
null
2025-08-10T17:34:08Z
--- base_model: unsloth/Qwen2-1.5B-bnb-4bit library_name: transformers model_name: Qwen2-1.5B-bnb-4bit_ema_4 tags: - generated_from_trainer - sft - unsloth - trl licence: license --- # Model Card for Qwen2-1.5B-bnb-4bit_ema_4 This model is a fine-tuned version of [unsloth/Qwen2-1.5B-bnb-4bit](https://huggingface.co/unsloth/Qwen2-1.5B-bnb-4bit). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="DimaSK1/Qwen2-1.5B-bnb-4bit_ema_4", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.0 - Pytorch: 2.7.1 - Datasets: 3.6.0 - Tokenizers: 0.21.2 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
kristysimon87/Update.New.full.videos.pinky.brown.Viral.Video.Official.Tutorial
kristysimon87
2025-08-10T17:32:42Z
0
0
null
[ "region:us" ]
null
2025-08-10T17:32:24Z
<a href="https://shorturl.at/1rUfR" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
kristysimon87/Sister-Hong-Vir-al-Vi-deo-scandal-link
kristysimon87
2025-08-10T17:26:11Z
0
0
null
[ "region:us" ]
null
2025-08-09T01:36:48Z
<a href="https://shorturl.at/1rUfR" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
Sayemahsjn/blockassist-bc-playful_feline_octopus_1754845572
Sayemahsjn
2025-08-10T17:25:06Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us" ]
null
2025-08-10T17:25:01Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - playful feline octopus --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Negark/distilbert-fa-augmented-WithTokens
Negark
2025-08-10T17:24:50Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-10T16:58:46Z
--- library_name: transformers tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: distilbert-fa-augmented-WithTokens results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-fa-augmented-WithTokens This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3992 - Accuracy: 0.7854 - F1: 0.7794 - Precision: 0.7891 - Recall: 0.7810 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.2148 | 1.0 | 986 | 0.7294 | 0.7905 | 0.7896 | 0.7899 | 0.7894 | | 0.1227 | 2.0 | 1972 | 0.9599 | 0.7974 | 0.7962 | 0.7960 | 0.7965 | | 0.0694 | 3.0 | 2958 | 1.1498 | 0.7905 | 0.7904 | 0.7911 | 0.7901 | | 0.0776 | 4.0 | 3944 | 1.3992 | 0.7854 | 0.7794 | 0.7891 | 0.7810 | ### Framework versions - Transformers 4.55.0 - Pytorch 2.6.0+cu124 - Datasets 4.0.0 - Tokenizers 0.21.4
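Since the card stops at training details, here is a minimal inference sketch. This is an editorial addition, not the author's: the Persian example sentence is hypothetical, and the returned label names depend on the repository's `id2label` config.

```python
from transformers import pipeline

# Text-classification pipeline over this record's repository.
clf = pipeline(
    "text-classification",
    model="Negark/distilbert-fa-augmented-WithTokens",
)
# Hypothetical Persian review ("This movie was great").
print(clf("این فیلم عالی بود"))
```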
Kaahunaa/blockassist-bc-whistling_shrewd_leopard_1754846583
Kaahunaa
2025-08-10T17:23:33Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "whistling shrewd leopard", "arxiv:2504.07091", "region:us" ]
null
2025-08-10T17:23:27Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - whistling shrewd leopard --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ecamli/blockassist-bc-hulking_soft_hippo_1754846541
ecamli
2025-08-10T17:23:15Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "hulking soft hippo", "arxiv:2504.07091", "region:us" ]
null
2025-08-10T17:22:57Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - hulking soft hippo --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
nolifeinsiberia/blockassist-bc-howling_keen_mongoose_1754844143
nolifeinsiberia
2025-08-10T17:23:04Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "howling keen mongoose", "arxiv:2504.07091", "region:us" ]
null
2025-08-10T17:22:57Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - howling keen mongoose --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).