modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
---|---|---|---|---|---|---|---|---|---|
unsloth/Mixtral-8x7B-v0.1-bnb-4bit
|
unsloth
| 2025-03-14T12:35:58Z | 0 | 0 | null |
[
"safetensors",
"mixtral",
"fr",
"it",
"de",
"es",
"en",
"base_model:mistralai/Mixtral-8x7B-v0.1",
"base_model:quantized:mistralai/Mixtral-8x7B-v0.1",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-14T11:54:18Z |
---
base_model: mistralai/Mixtral-8x7B-v0.1
language:
- fr
- it
- de
- es
- en
license: apache-2.0
inference:
parameters:
temperature: 0.5
widget:
- messages:
- role: user
content: What is your favorite condiment?
extra_gated_description: If you want to learn more about how we process your personal data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
---
# Model Card for Mixtral-8x7B
### Tokenization with `mistral-common`
```py
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
mistral_models_path = "MISTRAL_MODELS_PATH"
tokenizer = MistralTokenizer.v1()
completion_request = ChatCompletionRequest(messages=[UserMessage(content="Explain Machine Learning to me in a nutshell.")])
tokens = tokenizer.encode_chat_completion(completion_request).tokens
```
## Inference with `mistral_inference`
```py
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate
model = Transformer.from_folder(mistral_models_path)
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.decode(out_tokens[0])
print(result)
```
## Inference with Hugging Face `transformers`
```py
import torch
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")
model.to("cuda")
# wrap the mistral-common token ids in a batched tensor on the model's device
input_ids = torch.tensor([tokens], device="cuda")
generated_ids = model.generate(input_ids, max_new_tokens=1000, do_sample=True)
# decode with the mistral tokenizer
result = tokenizer.decode(generated_ids[0].tolist())
print(result)
```
> [!TIP]
> PRs to correct the transformers tokenizer so that it gives 1-to-1 the same results as the mistral-common reference implementation are very welcome!
---
The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.
For full details of this model please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/).
## Warning
This repo contains weights that are compatible with [vLLM](https://github.com/vllm-project/vllm) serving of the model as well as the Hugging Face [transformers](https://github.com/huggingface/transformers) library. It is based on the original Mixtral [torrent release](magnet:?xt=urn:btih:5546272da9065eddeb6fcd7ffddeef5b75be79a7&dn=mixtral-8x7b-32kseqlen&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=http%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce), but the file format and parameter names are different. Please note that the model cannot (yet) be instantiated with HF.
## Instruction format
This format must be strictly respected, otherwise the model will generate sub-optimal outputs.
The template used to build a prompt for the Instruct model is defined as follows:
```
<s> [INST] Instruction [/INST] Model answer</s> [INST] Follow-up instruction [/INST]
```
Note that `<s>` and `</s>` are special tokens for beginning of string (BOS) and end of string (EOS), while `[INST]` and `[/INST]` are regular strings.
As reference, here is the pseudo-code used to tokenize instructions during fine-tuning:
```python
def tokenize(text):
return tok.encode(text, add_special_tokens=False)
[BOS_ID] +
tokenize("[INST]") + tokenize(USER_MESSAGE_1) + tokenize("[/INST]") +
tokenize(BOT_MESSAGE_1) + [EOS_ID] +
…
tokenize("[INST]") + tokenize(USER_MESSAGE_N) + tokenize("[/INST]") +
tokenize(BOT_MESSAGE_N) + [EOS_ID]
```
In the pseudo-code above, note that the `tokenize` method should not add a BOS or EOS token automatically, but should add a prefix space.
In the Transformers library, one can use [chat templates](https://huggingface.co/docs/transformers/main/en/chat_templating) which make sure the right format is applied.
## Run the model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")
outputs = model.generate(inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
By default, transformers loads the model in full precision. You may therefore want to further reduce the memory requirements for running the model through the optimizations offered in the HF ecosystem:
### In half-precision
Note that `float16` precision only works on GPU devices.
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")
outputs = model.generate(input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
### Lower precision (8-bit & 4-bit) using `bitsandbytes`
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True, device_map="auto")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")
outputs = model.generate(input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
### Load the model with Flash Attention 2
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, use_flash_attention_2=True, device_map="auto")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")
outputs = model.generate(input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
## Limitations
The Mixtral-8x7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
# The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
|
mradermacher/Fuse-DeepSeek-R1-32B-LIMO-GGUF
|
mradermacher
| 2025-03-14T12:31:18Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:radna/Fuse-DeepSeek-R1-32B-LIMO",
"base_model:quantized:radna/Fuse-DeepSeek-R1-32B-LIMO",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-14T11:13:10Z |
---
base_model: radna/Fuse-DeepSeek-R1-32B-LIMO
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/radna/Fuse-DeepSeek-R1-32B-LIMO
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
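As a minimal sketch, assuming `llama-cpp-python` and `huggingface_hub` are installed, one of the quants from the table below can be downloaded and loaded roughly like this (the chosen file name and context length are assumptions):
```py
# Sketch: download one quant from this repo and load it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/Fuse-DeepSeek-R1-32B-LIMO-GGUF",
    filename="Fuse-DeepSeek-R1-32B-LIMO.Q4_K_M.gguf",  # one of the files listed below
)
llm = Llama(model_path=gguf_path, n_ctx=4096)  # context length chosen for illustration
print(llm("Explain mixture-of-experts in one sentence.", max_tokens=64)["choices"][0]["text"])
```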
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Fuse-DeepSeek-R1-32B-LIMO-GGUF/resolve/main/Fuse-DeepSeek-R1-32B-LIMO.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/Fuse-DeepSeek-R1-32B-LIMO-GGUF/resolve/main/Fuse-DeepSeek-R1-32B-LIMO.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/Fuse-DeepSeek-R1-32B-LIMO-GGUF/resolve/main/Fuse-DeepSeek-R1-32B-LIMO.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Fuse-DeepSeek-R1-32B-LIMO-GGUF/resolve/main/Fuse-DeepSeek-R1-32B-LIMO.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/Fuse-DeepSeek-R1-32B-LIMO-GGUF/resolve/main/Fuse-DeepSeek-R1-32B-LIMO.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/Fuse-DeepSeek-R1-32B-LIMO-GGUF/resolve/main/Fuse-DeepSeek-R1-32B-LIMO.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Fuse-DeepSeek-R1-32B-LIMO-GGUF/resolve/main/Fuse-DeepSeek-R1-32B-LIMO.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Fuse-DeepSeek-R1-32B-LIMO-GGUF/resolve/main/Fuse-DeepSeek-R1-32B-LIMO.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/Fuse-DeepSeek-R1-32B-LIMO-GGUF/resolve/main/Fuse-DeepSeek-R1-32B-LIMO.Q5_K_M.gguf) | Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/Fuse-DeepSeek-R1-32B-LIMO-GGUF/resolve/main/Fuse-DeepSeek-R1-32B-LIMO.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Fuse-DeepSeek-R1-32B-LIMO-GGUF/resolve/main/Fuse-DeepSeek-R1-32B-LIMO.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
csetesz/ujmisi1000
|
csetesz
| 2025-03-14T12:30:36Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-03-14T11:50:12Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
mradermacher/ECE-PRYMMAL0.5-FT-GGUF
|
mradermacher
| 2025-03-14T12:29:46Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"dataset:databricks/databricks-dolly-15k",
"base_model:Youlln/ECE-PRYMMAL0.5-FT",
"base_model:quantized:Youlln/ECE-PRYMMAL0.5-FT",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-14T12:24:38Z |
---
base_model: Youlln/ECE-PRYMMAL0.5-FT
datasets:
- databricks/databricks-dolly-15k
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Youlln/ECE-PRYMMAL0.5-FT
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ECE-PRYMMAL0.5-FT-GGUF/resolve/main/ECE-PRYMMAL0.5-FT.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/ECE-PRYMMAL0.5-FT-GGUF/resolve/main/ECE-PRYMMAL0.5-FT.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/ECE-PRYMMAL0.5-FT-GGUF/resolve/main/ECE-PRYMMAL0.5-FT.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/ECE-PRYMMAL0.5-FT-GGUF/resolve/main/ECE-PRYMMAL0.5-FT.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ECE-PRYMMAL0.5-FT-GGUF/resolve/main/ECE-PRYMMAL0.5-FT.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/ECE-PRYMMAL0.5-FT-GGUF/resolve/main/ECE-PRYMMAL0.5-FT.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ECE-PRYMMAL0.5-FT-GGUF/resolve/main/ECE-PRYMMAL0.5-FT.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ECE-PRYMMAL0.5-FT-GGUF/resolve/main/ECE-PRYMMAL0.5-FT.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/ECE-PRYMMAL0.5-FT-GGUF/resolve/main/ECE-PRYMMAL0.5-FT.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/ECE-PRYMMAL0.5-FT-GGUF/resolve/main/ECE-PRYMMAL0.5-FT.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ECE-PRYMMAL0.5-FT-GGUF/resolve/main/ECE-PRYMMAL0.5-FT.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ECE-PRYMMAL0.5-FT-GGUF/resolve/main/ECE-PRYMMAL0.5-FT.f16.gguf) | f16 | 1.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
DelVecchioAndrea/Llama3.8B-prova
|
DelVecchioAndrea
| 2025-03-14T12:29:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-14T12:25:52Z |
---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** DelVecchioAndrea
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
shisa-ai/ablation-52-rafathenev2.0.8.0-shisa-v2-llama-3.1-8b-lr8e6
|
shisa-ai
| 2025-03-14T12:28:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"dataset:shisa-ai/shisa-v1-athenev2-reannotated-filtered",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-14T12:24:57Z |
---
library_name: transformers
license: llama3.1
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
tags:
- generated_from_trainer
datasets:
- shisa-ai/shisa-v1-athenev2-reannotated-filtered
model-index:
- name: outputs/ablation-52-rafathenev2.0.8.0-shisa-v2-llama-3.1-8b-lr8e6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.8.0.dev0`
```yaml
# train w/ shisa-ai/shisa-v1-athenev2-reannotated-filtered
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: false
strict: false
# Use Liger
plugins:
- axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: true
chat_template: llama3
datasets:
- path: shisa-ai/shisa-v1-athenev2-reannotated-filtered
type: chat_template
field_messages: conversations
message_property_mappings:
role: from
content: value
roles:
system:
- system
assistant:
- gpt
- model
- assistant
user:
- human
- user
roles_to_train: ["assistant"]
dataset_prepared_path: last_run_prepared
val_set_size: 0.05
output_dir: ./outputs/ablation-52-rafathenev2.0.8.0-shisa-v2-llama-3.1-8b-lr8e6
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
# marginal difference
neftune_noise_alpha: 5
use_wandb: true
wandb_project: shisa-v2
wandb_entity: augmxnt
wandb_name: ablation-52-rafathenev2.0.8.0-shisa-v2-llama-3.1-8b-lr8e6
gradient_accumulation_steps: 2
micro_batch_size: 4
num_epochs: 3
optimizer: paged_adamw_8bit
lr_scheduler: linear
learning_rate: 8e-6
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 100
evals_per_epoch: 2
eval_table_size:
saves_per_epoch: 0
save_total_limit: 1 # Only store a single checkpoint
debug:
deepspeed: zero3_bf16.json
weight_decay: 0.00
fsdp:
fsdp_config:
special_tokens:
pad_token: <|end_of_text|>
```
</details><br>
# outputs/ablation-52-rafathenev2.0.8.0-shisa-v2-llama-3.1-8b-lr8e6
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on the shisa-ai/shisa-v1-athenev2-reannotated-filtered dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4476
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: PAGED_ADAMW_8BIT with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8213 | 0.0058 | 1 | 0.5773 |
| 0.6163 | 0.5029 | 87 | 0.4710 |
| 0.5244 | 1.0058 | 174 | 0.4463 |
| 0.5123 | 1.5087 | 261 | 0.4412 |
| 0.4385 | 2.0116 | 348 | 0.4388 |
| 0.4077 | 2.5145 | 435 | 0.4476 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
lewaldm/panopticon
|
lewaldm
| 2025-03-14T12:27:05Z | 7 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2025-03-12T23:59:02Z |
---
license: mit
---
This repo contains the model weights and dataset meta files for the Panopticon paper, main repo [here](https://github.com/Panopticon-FM/panopticon). In particular:
- panopticon_vitb14: full weights after 2 stages of training with student, teacher, and dino heads
- panopticon_vitb14_teacher: only the teacher weights from panopticon_vitb14; these are sufficient for using panopticon (only this will be loaded when using panopticon via torchhub as described in the [main repo](https://github.com/Panopticon-FM/panopticon/tree/main?tab=readme-ov-file#using-panopticon)); see the sketch after this list
- rgb_heads: weights for rgb heads obtained by training the dinov2 checkpoint on fmow-rgb
- metadata: contains all parquet files used to index the pre-training data, for folder structure see [here](https://github.com/Panopticon-FM/panopticon?tab=readme-ov-file#metadata-files)
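A minimal loading sketch, assuming the main repo exposes a torchhub entrypoint named `panopticon_vitb14` (the main repo's usage instructions remain authoritative):
```py
# Sketch: load panopticon via torch.hub; the entrypoint name "panopticon_vitb14"
# is an assumption based on the weight name above — check the main repo's hubconf.
import torch

model = torch.hub.load("Panopticon-FM/panopticon", "panopticon_vitb14")
model.eval()
```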
|
vrrtht4/abn
|
vrrtht4
| 2025-03-14T12:25:45Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-03-14T11:46:52Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
PedramR/sft_test1
|
PedramR
| 2025-03-14T12:20:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-14T12:19:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
spyrok/llama-2-7b-chat-lolcode-fin
|
spyrok
| 2025-03-14T12:19:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-14T12:14:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
huberm/ModernBERT-medium-custom-corp-zh-WordLevel
|
huberm
| 2025-03-14T12:18:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"modernbert",
"fill-mask",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-03-14T12:09:46Z |
---
library_name: transformers
license: cc-by-nc-4.0
language:
- en
pipeline_tag: fill-mask
---
# Model Card for Model ID
Medium-sized ModernBERT trained on a custom corpus written mainly in Simplified Chinese using WordLevel tokenization (equivalently, tokenization determined by the corpus files). The custom corpus consists of the entire [Chinese Treebank 9.0](https://catalog.ldc.upenn.edu/LDC2016T13) and the first half of the "XIN_CMN"-portion of the [Tagged Chinese Gigaword Version 2.0](https://catalog.ldc.upenn.edu/LDC2009T14).
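A minimal usage sketch with the standard `transformers` fill-mask pipeline (the `[MASK]` token and the whitespace-segmented example sentence are assumptions):
```py
# Sketch: query the model with the fill-mask pipeline; the example sentence is illustrative.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="huberm/ModernBERT-medium-custom-corp-zh-WordLevel")
for prediction in unmasker("今天 天气 很 [MASK] 。"):
    print(prediction["token_str"], prediction["score"])
```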
|
ConiferousYogi/GRPO_DeepSeekR1Nano
|
ConiferousYogi
| 2025-03-14T12:16:00Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"grpo",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-14T12:10:06Z |
---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- grpo
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ConiferousYogi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
vaibkumar/agentic_training_finetuned_v6-Q4_K_M-GGUF
|
vaibkumar
| 2025-03-14T12:13:28Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:vaibkumar/agentic_training_finetuned_v6",
"base_model:quantized:vaibkumar/agentic_training_finetuned_v6",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-14T11:18:34Z |
---
base_model: vaibkumar/agentic_training_finetuned_v6
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
---
# vaibkumar/agentic_training_finetuned_v6-Q4_K_M-GGUF
This model was converted to GGUF format from [`vaibkumar/agentic_training_finetuned_v6`](https://huggingface.co/vaibkumar/agentic_training_finetuned_v6) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/vaibkumar/agentic_training_finetuned_v6) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo vaibkumar/agentic_training_finetuned_v6-Q4_K_M-GGUF --hf-file agentic_training_finetuned_v6-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo vaibkumar/agentic_training_finetuned_v6-Q4_K_M-GGUF --hf-file agentic_training_finetuned_v6-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo vaibkumar/agentic_training_finetuned_v6-Q4_K_M-GGUF --hf-file agentic_training_finetuned_v6-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo vaibkumar/agentic_training_finetuned_v6-Q4_K_M-GGUF --hf-file agentic_training_finetuned_v6-q4_k_m.gguf -c 2048
```
|
b13nb3n/solidsnake_28
|
b13nb3n
| 2025-03-14T12:13:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-14T10:25:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
zurandmoro/755994230b31
|
zurandmoro
| 2025-03-14T12:13:16Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-03-14T12:12:00Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: 755994230b31
---
# 755994230B31
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `755994230b31` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('zurandmoro/755994230b31', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
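For example, a minimal sketch that includes the trigger word in the prompt passed to the pipeline above (the rest of the prompt text is illustrative):
```py
# Illustrative only: the trigger word must appear in the prompt; the remaining
# prompt text is a made-up example.
image = pipeline("755994230b31, a cinematic portrait photo").images[0]
image.save("output.png")
```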
|
JIAN-PENG/Qwen2.5_3B_GRPO_gsm8k_500
|
JIAN-PENG
| 2025-03-14T12:12:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"grpo",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-14T12:11:15Z |
---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- grpo
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** JIAN-PENG
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
bofenghuang/Mistral-Small-24B-Instruct-2501
|
bofenghuang
| 2025-03-14T12:09:02Z | 0 | 0 |
vllm
|
[
"vllm",
"safetensors",
"mistral",
"text-generation",
"transformers",
"conversational",
"en",
"fr",
"de",
"es",
"it",
"pt",
"zh",
"ja",
"ru",
"ko",
"base_model:mistralai/Mistral-Small-24B-Base-2501",
"base_model:finetune:mistralai/Mistral-Small-24B-Base-2501",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] |
text-generation
| 2025-03-14T11:59:36Z |
---
language:
- en
- fr
- de
- es
- it
- pt
- zh
- ja
- ru
- ko
license: apache-2.0
library_name: vllm
inference: false
base_model:
- mistralai/Mistral-Small-24B-Base-2501
extra_gated_description: >-
If you want to learn more about how we process your personal data, please read
our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
tags:
- transformers
---
# Model Card for Mistral-Small-24B-Instruct-2501
Mistral Small 3 (2501) sets a new benchmark in the "small" Large Language Models category below 70B, boasting 24B parameters and achieving state-of-the-art capabilities comparable to larger models!
This model is an instruction-fine-tuned version of the base model: [Mistral-Small-24B-Base-2501](https://huggingface.co/mistralai/Mistral-Small-24B-Base-2501).
Mistral Small can be deployed locally and is exceptionally "knowledge-dense", fitting in a single RTX 4090 or a 32GB RAM MacBook once quantized.
Perfect for:
- Fast response conversational agents.
- Low latency function calling.
- Subject matter experts via fine-tuning.
- Local inference for hobbyists and organizations handling sensitive data.
For enterprises that need specialized capabilities (increased context, particular modalities, domain specific knowledge, etc.), we will be releasing commercial models beyond what Mistral AI contributes to the community.
This release demonstrates our commitment to open source, serving as a strong base model.
Learn more about Mistral Small in our [blog post](https://mistral.ai/news/mistral-small-3/).
Model developer: Mistral AI Team
## Key Features
- **Multilingual:** Supports dozens of languages, including English, French, German, Spanish, Italian, Chinese, Japanese, Korean, Portuguese, Dutch, and Polish.
- **Agent-Centric:** Offers best-in-class agentic capabilities with native function calling and JSON outputting.
- **Advanced Reasoning:** State-of-the-art conversational and reasoning capabilities.
- **Apache 2.0 License:** Open license allowing usage and modification for both commercial and non-commercial purposes.
- **Context Window:** A 32k context window.
- **System Prompt:** Maintains strong adherence and support for system prompts.
- **Tokenizer:** Utilizes a Tekken tokenizer with a 131k vocabulary size.
## Benchmark results
### Human evaluated benchmarks
| Category | Gemma-2-27B | Qwen-2.5-32B | Llama-3.3-70B | Gpt4o-mini |
|----------|-------------|--------------|---------------|------------|
| Mistral is better | 0.536 | 0.496 | 0.192 | 0.200 |
| Mistral is slightly better | 0.196 | 0.184 | 0.164 | 0.204 |
| Ties | 0.052 | 0.060 | 0.236 | 0.160 |
| Other is slightly better | 0.060 | 0.088 | 0.112 | 0.124 |
| Other is better | 0.156 | 0.172 | 0.296 | 0.312 |
**Note**:
- We conducted side by side evaluations with an external third-party vendor, on a set of over 1k proprietary coding and generalist prompts.
- Evaluators were tasked with selecting their preferred model response from anonymized generations produced by Mistral Small 3 vs another model.
- We are aware that in some cases the benchmarks on human judgement starkly differ from publicly available benchmarks, but have taken extra caution in verifying a fair evaluation. We are confident that the above benchmarks are valid.
### Publicly accessible benchmarks
**Reasoning & Knowledge**
| Evaluation | mistral-small-24B-instruct-2501 | gemma-2b-27b | llama-3.3-70b | qwen2.5-32b | gpt-4o-mini-2024-07-18 |
|------------|---------------|--------------|---------------|---------------|-------------|
| mmlu_pro_5shot_cot_instruct | 0.663 | 0.536 | 0.666 | 0.683 | 0.617 |
| gpqa_main_cot_5shot_instruct | 0.453 | 0.344 | 0.531 | 0.404 | 0.377 |
**Math & Coding**
| Evaluation | mistral-small-24B-instruct-2501 | gemma-2b-27b | llama-3.3-70b | qwen2.5-32b | gpt-4o-mini-2024-07-18 |
|------------|---------------|--------------|---------------|---------------|-------------|
| humaneval_instruct_pass@1 | 0.848 | 0.732 | 0.854 | 0.909 | 0.890 |
| math_instruct | 0.706 | 0.535 | 0.743 | 0.819 | 0.761 |
**Instruction following**
| Evaluation | mistral-small-24B-instruct-2501 | gemma-2b-27b | llama-3.3-70b | qwen2.5-32b | gpt-4o-mini-2024-07-18 |
|------------|---------------|--------------|---------------|---------------|-------------|
| mtbench_dev | 8.35 | 7.86 | 7.96 | 8.26 | 8.33 |
| wildbench | 52.27 | 48.21 | 50.04 | 52.73 | 56.13 |
| arena_hard | 0.873 | 0.788 | 0.840 | 0.860 | 0.897 |
| ifeval | 0.829 | 0.8065 | 0.8835 | 0.8401 | 0.8499 |
**Note**:
- Performance accuracy on all benchmarks was obtained through the same internal evaluation pipeline; as such, numbers may vary slightly from previously reported performance
([Qwen2.5-32B-Instruct](https://qwenlm.github.io/blog/qwen2.5/), [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct), [Gemma-2-27B-IT](https://huggingface.co/google/gemma-2-27b-it)).
- Judge based evals such as Wildbench, Arena hard and MTBench were based on gpt-4o-2024-05-13.
### Basic Instruct Template (V7-Tekken)
```
<s>[SYSTEM_PROMPT]<system prompt>[/SYSTEM_PROMPT][INST]<user message>[/INST]<assistant response></s>[INST]<user message>[/INST]
```
*`<system prompt>`, `<user message>` and `<assistant response>` are placeholders.*
***Please make sure to use [mistral-common](https://github.com/mistralai/mistral-common) as the source of truth***
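As an illustration of the layout above, a hypothetical helper that assembles the prompt string from a message list (use `mistral-common` in practice; this only mirrors the template shown):
```py
# Sketch: assemble the V7-Tekken prompt string from a message list.
# mistral-common remains the source of truth for tokenization.
def build_v7_tekken_prompt(messages: list[dict]) -> str:
    prompt = "<s>"
    for message in messages:
        if message["role"] == "system":
            prompt += f"[SYSTEM_PROMPT]{message['content']}[/SYSTEM_PROMPT]"
        elif message["role"] == "user":
            prompt += f"[INST]{message['content']}[/INST]"
        elif message["role"] == "assistant":
            prompt += f"{message['content']}</s>"
    return prompt

print(build_v7_tekken_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]))
```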
## Usage
The model can be used with the following frameworks;
- [`vllm`](https://github.com/vllm-project/vllm): See [here](#vllm)
- [`transformers`](https://github.com/huggingface/transformers): See [here](#transformers)
### vLLM
We recommend using this model with the [vLLM library](https://github.com/vllm-project/vllm)
to implement production-ready inference pipelines.
**Note 1**: We recommend using a relatively low temperature, such as `temperature=0.15`.
**Note 2**: Make sure to add a system prompt to the model to best tailor it to your needs. If you want to use the model as a general assistant, we recommend the following
system prompt:
```
system_prompt = """You are Mistral Small 3, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris.
Your knowledge base was last updated on 2023-10-01. The current date is 2025-01-30.
When you're not sure about some information, you say that you don't have the information and don't make up anything.
If the user's question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g. \"What are some good restaurants around me?\" => \"Where are you?\" or \"When is the next flight to Tokyo\" => \"Where do you travel from?\")"""
```
**_Installation_**
Make sure you install [`vLLM >= 0.6.4`](https://github.com/vllm-project/vllm/releases/tag/v0.6.4):
```
pip install --upgrade vllm
```
Also make sure you have [`mistral_common >= 1.5.2`](https://github.com/mistralai/mistral-common/releases/tag/v1.5.2) installed:
```
pip install --upgrade mistral_common
```
You can also make use of a ready-to-go [docker image](https://github.com/vllm-project/vllm/blob/main/Dockerfile) or one from the [docker hub](https://hub.docker.com/layers/vllm/vllm-openai/latest/images/sha256-de9032a92ffea7b5c007dad80b38fd44aac11eddc31c435f8e52f3b7404bbf39).
#### Server
We recommend that you use Mistral-Small-24B-Instruct-2501 in a server/client setting.
1. Spin up a server:
```
vllm serve mistralai/Mistral-Small-24B-Instruct-2501 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice
```
**Note:** Running Mistral-Small-24B-Instruct-2501 on GPU requires ~55 GB of GPU RAM in bf16 or fp16.
2. To ping the client you can use a simple Python snippet.
```py
import requests
import json
from datetime import datetime, timedelta
url = "http://<your-server>:8000/v1/chat/completions"
headers = {"Content-Type": "application/json", "Authorization": "Bearer token"}
model = "mistralai/Mistral-Small-24B-Instruct-2501"
messages = [
{
"role": "system",
"content": "You are a conversational agent that always answers straight to the point, always end your accurate response with an ASCII drawing of a cat."
},
{
"role": "user",
"content": "Give me 5 non-formal ways to say 'See you later' in French."
},
]
data = {"model": model, "messages": messages}
response = requests.post(url, headers=headers, data=json.dumps(data))
print(response.json()["choices"][0]["message"]["content"])
# Sure, here are five non-formal ways to say "See you later" in French:
#
# 1. À plus tard
# 2. À plus
# 3. Salut
# 4. À toute
# 5. Bisous
#
# ```
# /\_/\
# ( o.o )
# > ^ <
# ```
```
### Function calling
Mistral-Small-24B-Instruct-2501 is excellent at function / tool calling tasks via vLLM. *E.g.:*
<details>
<summary>Example</summary>
```py
import requests
import json
from huggingface_hub import hf_hub_download
from datetime import datetime, timedelta
url = "http://<your-url>:8000/v1/chat/completions"
headers = {"Content-Type": "application/json", "Authorization": "Bearer token"}
model = "mistralai/Mistral-Small-24B-Instruct-2501"
def load_system_prompt(repo_id: str, filename: str) -> str:
file_path = hf_hub_download(repo_id=repo_id, filename=filename)
with open(file_path, "r") as file:
system_prompt = file.read()
today = datetime.today().strftime("%Y-%m-%d")
yesterday = (datetime.today() - timedelta(days=1)).strftime("%Y-%m-%d")
model_name = repo_id.split("/")[-1]
return system_prompt.format(name=model_name, today=today, yesterday=yesterday)
SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt")
tools = [
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"city": {
"type": "string",
"description": "The city to find the weather for, e.g. 'San Francisco'",
},
"state": {
"type": "string",
"description": "The state abbreviation, e.g. 'CA' for California",
},
"unit": {
"type": "string",
"description": "The unit for temperature",
"enum": ["celsius", "fahrenheit"],
},
},
"required": ["city", "state", "unit"],
},
},
},
{
"type": "function",
"function": {
"name": "rewrite",
"description": "Rewrite a given text for improved clarity",
"parameters": {
"type": "object",
"properties": {
"text": {
"type": "string",
"description": "The input text to rewrite",
}
},
},
},
},
]
messages = [
{"role": "system", "content": SYSTEM_PROMPT},
{
"role": "user",
"content": "Could you please make the below article more concise?\n\nOpenAI is an artificial intelligence research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership.",
},
{
"role": "assistant",
"content": "",
"tool_calls": [
{
"id": "bbc5b7ede",
"type": "function",
"function": {
"name": "rewrite",
"arguments": '{"text": "OpenAI is an artificial intelligence research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership."}',
},
}
],
},
{
"role": "tool",
"content": '{"action":"rewrite","outcome":"OpenAI is a FOR-profit company."}',
"tool_call_id": "bbc5b7ede",
"name": "rewrite",
},
{
"role": "assistant",
"content": "---\n\nOpenAI is a FOR-profit company.",
},
{
"role": "user",
"content": "Can you tell me what the temperature will be in Dallas, in Fahrenheit?",
},
]
data = {"model": model, "messages": messages, "tools": tools}
response = requests.post(url, headers=headers, data=json.dumps(data))
print(response.json()["choices"][0]["message"]["tool_calls"])
# [{'id': '8PdihwL6d', 'type': 'function', 'function': {'name': 'get_current_weather', 'arguments': '{"city": "Dallas", "state": "TX", "unit": "fahrenheit"}'}}]
```
</details>
#### Offline
```py
from vllm import LLM
from vllm.sampling_params import SamplingParams
from datetime import datetime, timedelta
SYSTEM_PROMPT = "You are a conversational agent that always answers straight to the point, always end your accurate response with an ASCII drawing of a cat."
user_prompt = "Give me 5 non-formal ways to say 'See you later' in French."
messages = [
{
"role": "system",
"content": SYSTEM_PROMPT
},
{
"role": "user",
"content": user_prompt
},
]
model_name = "mistralai/Mistral-Small-24B-Instruct-2501"
# note that running this model on GPU requires over 60 GB of GPU RAM
llm = LLM(model=model_name, tokenizer_mode="mistral", tensor_parallel_size=8)
sampling_params = SamplingParams(max_tokens=512, temperature=0.15)
outputs = llm.chat(messages, sampling_params=sampling_params)
print(outputs[0].outputs[0].text)
# Sure, here are five non-formal ways to say "See you later" in French:
#
# 1. À plus tard
# 2. À plus
# 3. Salut
# 4. À toute
# 5. Bisous
#
# ```
# /\_/\
# ( o.o )
# > ^ <
# ```
```
### Transformers
If you want to use Hugging Face transformers to generate text, you can do something like this.
```py
from transformers import pipeline
import torch
messages = [
{"role": "user", "content": "Give me 5 non-formal ways to say 'See you later' in French."},
]
chatbot = pipeline("text-generation", model="mistralai/Mistral-Small-24B-Instruct-2501", max_new_tokens=256, torch_dtype=torch.bfloat16)
chatbot(messages)
```
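If you prefer more control than the pipeline offers, a lower-level sketch (assuming the checkpoint ships a chat template usable by `transformers`) is:
```py
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "mistralai/Mistral-Small-24B-Instruct-2501"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "user", "content": "Give me 5 non-formal ways to say 'See you later' in French."},
]
# build the prompt with the model's chat template, then generate
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256)
# decode only the newly generated tokens
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```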
### Ollama
[Ollama](https://github.com/ollama/ollama) can run this model locally on MacOS, Windows and Linux.
```
ollama run mistral-small
```
4-bit quantization (aliased to default):
```
ollama run mistral-small:24b-instruct-2501-q4_K_M
```
8-bit quantization:
```
ollama run mistral-small:24b-instruct-2501-q8_0
```
FP16:
```
ollama run mistral-small:24b-instruct-2501-fp16
```
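Once the model is running, Ollama also exposes a local HTTP API (port 11434 by default); a minimal sketch:
```
curl http://localhost:11434/api/chat -d '{
  "model": "mistral-small",
  "messages": [{"role": "user", "content": "Say hello in French."}],
  "stream": false
}'
```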
|
MeiKing111/SN09_COM4_117
|
MeiKing111
| 2025-03-14T12:06:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-13T16:23:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AkankshaaJojy/NaNo_R1_model
|
AkankshaaJojy
| 2025-03-14T12:06:19Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"grpo",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-14T11:59:27Z |
---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- grpo
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** AkankshaaJojy
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MeiKing111/SN09_COM4_115
|
MeiKing111
| 2025-03-14T12:04:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-13T16:23:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
marijagjorgjieva/finki-gpt-700-capybara5
|
marijagjorgjieva
| 2025-03-14T12:00:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-03-14T11:31:10Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/internlm2-wqx-20b-i1-GGUF
|
mradermacher
| 2025-03-14T12:00:07Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:internlm/internlm2-wqx-20b",
"base_model:quantized:internlm/internlm2-wqx-20b",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-03-14T07:19:52Z |
---
base_model: internlm/internlm2-wqx-20b
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/internlm/internlm2-wqx-20b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/internlm2-wqx-20b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
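For example (file name taken from the table below; flags may differ across llama.cpp versions), a quant can be fetched and run directly with `llama-cli`:
```
llama-cli --hf-repo mradermacher/internlm2-wqx-20b-i1-GGUF \
  --hf-file internlm2-wqx-20b.i1-Q4_K_M.gguf \
  -p "Explain the Pythagorean theorem in one sentence."
```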
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/internlm2-wqx-20b-i1-GGUF/resolve/main/internlm2-wqx-20b.i1-IQ1_S.gguf) | i1-IQ1_S | 4.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/internlm2-wqx-20b-i1-GGUF/resolve/main/internlm2-wqx-20b.i1-IQ1_M.gguf) | i1-IQ1_M | 5.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/internlm2-wqx-20b-i1-GGUF/resolve/main/internlm2-wqx-20b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2-wqx-20b-i1-GGUF/resolve/main/internlm2-wqx-20b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2-wqx-20b-i1-GGUF/resolve/main/internlm2-wqx-20b.i1-IQ2_S.gguf) | i1-IQ2_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2-wqx-20b-i1-GGUF/resolve/main/internlm2-wqx-20b.i1-IQ2_M.gguf) | i1-IQ2_M | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2-wqx-20b-i1-GGUF/resolve/main/internlm2-wqx-20b.i1-Q2_K_S.gguf) | i1-Q2_K_S | 7.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/internlm2-wqx-20b-i1-GGUF/resolve/main/internlm2-wqx-20b.i1-Q2_K.gguf) | i1-Q2_K | 7.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/internlm2-wqx-20b-i1-GGUF/resolve/main/internlm2-wqx-20b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 7.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/internlm2-wqx-20b-i1-GGUF/resolve/main/internlm2-wqx-20b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 8.5 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2-wqx-20b-i1-GGUF/resolve/main/internlm2-wqx-20b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 8.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/internlm2-wqx-20b-i1-GGUF/resolve/main/internlm2-wqx-20b.i1-IQ3_S.gguf) | i1-IQ3_S | 8.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/internlm2-wqx-20b-i1-GGUF/resolve/main/internlm2-wqx-20b.i1-IQ3_M.gguf) | i1-IQ3_M | 9.2 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2-wqx-20b-i1-GGUF/resolve/main/internlm2-wqx-20b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 9.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/internlm2-wqx-20b-i1-GGUF/resolve/main/internlm2-wqx-20b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 10.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/internlm2-wqx-20b-i1-GGUF/resolve/main/internlm2-wqx-20b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 10.9 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2-wqx-20b-i1-GGUF/resolve/main/internlm2-wqx-20b.i1-Q4_0.gguf) | i1-Q4_0 | 11.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/internlm2-wqx-20b-i1-GGUF/resolve/main/internlm2-wqx-20b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 11.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/internlm2-wqx-20b-i1-GGUF/resolve/main/internlm2-wqx-20b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 12.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/internlm2-wqx-20b-i1-GGUF/resolve/main/internlm2-wqx-20b.i1-Q4_1.gguf) | i1-Q4_1 | 12.6 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2-wqx-20b-i1-GGUF/resolve/main/internlm2-wqx-20b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2-wqx-20b-i1-GGUF/resolve/main/internlm2-wqx-20b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 14.2 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2-wqx-20b-i1-GGUF/resolve/main/internlm2-wqx-20b.i1-Q6_K.gguf) | i1-Q6_K | 16.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Grogros/Grogros-dmWM-llama-3.2-1B-Instruct-OWTWM-DWM-Al4-WT-d4-a0.1-v5-meta-OWT-learnability_adv
|
Grogros
| 2025-03-14T11:57:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"dataset:openwebtext",
"base_model:Grogros/dmWM-llama-3.2-1B-Instruct-OWTWM-DWM-Al4-WT-d4-a0.1-v5-meta-OWT",
"base_model:finetune:Grogros/dmWM-llama-3.2-1B-Instruct-OWTWM-DWM-Al4-WT-d4-a0.1-v5-meta-OWT",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-14T08:19:41Z |
---
library_name: transformers
license: llama3.2
base_model: Grogros/dmWM-llama-3.2-1B-Instruct-OWTWM-DWM-Al4-WT-d4-a0.1-v5-meta-OWT
tags:
- generated_from_trainer
datasets:
- openwebtext
model-index:
- name: Grogros-dmWM-llama-3.2-1B-Instruct-OWTWM-DWM-Al4-WT-d4-a0.1-v5-meta-OWT-learnability_adv
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Grogros-dmWM-llama-3.2-1B-Instruct-OWTWM-DWM-Al4-WT-d4-a0.1-v5-meta-OWT-learnability_adv
This model is a fine-tuned version of [Grogros/dmWM-llama-3.2-1B-Instruct-OWTWM-DWM-Al4-WT-d4-a0.1-v5-meta-OWT](https://huggingface.co/Grogros/dmWM-llama-3.2-1B-Instruct-OWTWM-DWM-Al4-WT-d4-a0.1-v5-meta-OWT) on the openwebtext dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: OptimizerNames.ADAFACTOR (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 2500
### Training results
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1.post303
- Datasets 3.2.0
- Tokenizers 0.20.4
|
vukrosic/guess_word_apple_grpo
|
vukrosic
| 2025-03-14T11:56:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-14T11:16:04Z |
---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** vukrosic
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AlekseyElygin/QVikhr-2.5-1.5B-Instruct-r-Lora
|
AlekseyElygin
| 2025-03-14T11:55:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:Vikhrmodels/QVikhr-2.5-1.5B-Instruct-r",
"base_model:finetune:Vikhrmodels/QVikhr-2.5-1.5B-Instruct-r",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-14T11:55:20Z |
---
base_model: Vikhrmodels/QVikhr-2.5-1.5B-Instruct-r
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** AlekseyElygin
- **License:** apache-2.0
- **Finetuned from model :** Vikhrmodels/QVikhr-2.5-1.5B-Instruct-r
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Papedemba/waxal_wolof_wls-r-wav2vec2
|
Papedemba
| 2025-03-14T11:52:39Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-14T11:52:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TheoVincent/Atari_i-QN
|
TheoVincent
| 2025-03-14T11:51:13Z | 17 | 2 | null |
[
"reinforcement-learning",
"jax",
"atari",
"arxiv:1806.06923",
"arxiv:2403.02107",
"license:mit",
"co2_eq_emissions",
"region:us"
] |
reinforcement-learning
| 2024-12-03T18:54:41Z |
---
license: mit
license_link: https://huggingface.co/TheoVincent/Atari_i-QN/blob/main/LICENSE
tags:
- reinforcement-learning
- jax
- atari
co2_eq_emissions:
emissions: 3000000
---
# Model parameters trained with `i-DQN` and `i-IQN`
This repository contains the model parameters trained with `i-DQN` on [56 Atari games](#i-DQN_games) and with `i-IQN` on [20 Atari games](#i-IQN_games) 🎮 5 seeds are available for each configuration, which makes a total of **380 available models** 📈
The [evaluate.ipynb](./evaluate.ipynb) notebook contains a minimal example to evaluate the model parameters 🧑🏫 It uses JAX 🚀 The hyperparameters used during training are reported in [config.json](./config.json) 🔧
The training code is available here 👉[💻](https://github.com/theovincent/i-DQN)
ps: The set of [20 Atari games](#i-IQN_games) is included in the set of [56 Atari games](#i-DQN_games).
### Model performances
| <div style="width:300px; font-size: 30px; font-family:Serif; font-name:Times New Roman" > **i-DQN** and **i-IQN** are improvements of [DQN](https://www.nature.com/articles/nature14236.pdf) and [IQN](https://arxiv.org/abs/1806.06923). <br> Published at [TMLR](https://arxiv.org/abs/2403.02107)✨ </br> <div style="font-size: 16px"> <details> <summary id=i-DQN_games>List of games trained with `i-DQN` </summary> *Alien, Amidar, Assault, Asterix, Asteroids, Atlantis, BankHeist, BattleZone, BeamRider, Berzerk, Bowling, Boxing, Breakout, Centipede, ChopperCommand, CrazyClimber, DemonAttack, DoubleDunk, Enduro, FishingDerby, Freeway, Frostbite, Gopher, Gravitar, Hero, IceHockey, Jamesbond, Kangaroo, Krull, KungFuMaster, MontezumaRevenge, MsPacman, NameThisGame, Phoenix, Pitfall, Pong, Pooyan, PrivateEye, Qbert, Riverraid, RoadRunner, Robotank, Seaquest, Skiing, Solaris, SpaceInvaders, StarGunner, Tennis, TimePilot, Tutankham, UpNDown, Venture, VideoPinball, WizardOfWor, YarsRevenge, Zaxxon.* </details> <details> <summary id=i-IQN_games>List of games trained with `i-IQN`</summary> *Alien, Assault, BankHeist, Berzerk, Breakout, Centipede, ChopperCommand, DemonAttack, Enduro, Frostbite, Gopher, Gravitar, IceHockey, Jamesbond, Krull, KungFuMaster, Riverraid, Seaquest, Skiing, StarGunner.* </details> </div> </div> | <img src="performances.png" alt="drawing" width="600px"/> |
| :-: | :-: |
## User installation
Python 3.10 is recommended. Create a Python virtual environment, activate it, update pip and install the package and its dependencies in editable mode:
```bash
python3.10 -m venv env
source env/bin/activate
pip install --upgrade pip
pip install numpy==1.23.5 # to avoid numpy==2.XX
pip install -r requirements.txt
pip install --upgrade "jax[cuda12_pip]==0.4.13" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
```
## Citing `iterated Q-Network`
```
@article{vincent2024iterated,
title={Iterated $ Q $-Network: Beyond the One-Step Bellman Operator},
author={Vincent, Th{\'e}o and Palenicek, Daniel and Belousov, Boris and Peters, Jan and D'Eramo, Carlo},
journal={Transactions on Machine Learning Research},
year={2025}
}
```
|
tscstudios/iwal7zawwerd8k7vjzyubn9guup1_3727ed6a-95cb-4d68-931d-cc8bb548944f
|
tscstudios
| 2025-03-14T11:49:23Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-03-14T11:49:22Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Iwal7Zawwerd8K7Vjzyubn9Guup1_3727Ed6A 95Cb 4D68 931D Cc8Bb548944F
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('tscstudios/iwal7zawwerd8k7vjzyubn9guup1_3727ed6a-95cb-4d68-931d-cc8bb548944f', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
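If you want to control how strongly the LoRA is applied, here is a minimal sketch (assuming a recent `diffusers` release with the PEFT backend; the adapter name and the 0.8 scale are illustrative):
```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
# name the adapter so its weight can be adjusted after loading
pipeline.load_lora_weights(
    'tscstudios/iwal7zawwerd8k7vjzyubn9guup1_3727ed6a-95cb-4d68-931d-cc8bb548944f',
    weight_name='lora.safetensors',
    adapter_name='tok_style',
)
pipeline.set_adapters(['tok_style'], adapter_weights=[0.8])  # scale down the LoRA effect
image = pipeline('TOK, your prompt').images[0]
```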
|
MrRobotoAI/303
|
MrRobotoAI
| 2025-03-14T11:48:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2203.05482",
"base_model:MrRobotoAI/107",
"base_model:merge:MrRobotoAI/107",
"base_model:MrRobotoAI/301",
"base_model:merge:MrRobotoAI/301",
"base_model:MrRobotoAI/302",
"base_model:merge:MrRobotoAI/302",
"base_model:MrRobotoAI/Loki-v4.1-8b-EROTICA-128K",
"base_model:merge:MrRobotoAI/Loki-v4.1-8b-EROTICA-128K",
"base_model:MrRobotoAI/Nord-8b-Uncensored-BASE-128k",
"base_model:merge:MrRobotoAI/Nord-8b-Uncensored-BASE-128k",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-14T11:44:27Z |
---
base_model:
- MrRobotoAI/301
- MrRobotoAI/302
- MrRobotoAI/Loki-v4.1-8b-EROTICA-128K
- MrRobotoAI/Nord-8b-Uncensored-BASE-128k
- MrRobotoAI/107
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* [MrRobotoAI/301](https://huggingface.co/MrRobotoAI/301)
* [MrRobotoAI/302](https://huggingface.co/MrRobotoAI/302)
* [MrRobotoAI/Loki-v4.1-8b-EROTICA-128K](https://huggingface.co/MrRobotoAI/Loki-v4.1-8b-EROTICA-128K)
* [MrRobotoAI/Nord-8b-Uncensored-BASE-128k](https://huggingface.co/MrRobotoAI/Nord-8b-Uncensored-BASE-128k)
* [MrRobotoAI/107](https://huggingface.co/MrRobotoAI/107)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: MrRobotoAI/107
- model: MrRobotoAI/Nord-8b-Uncensored-BASE-128k
- model: MrRobotoAI/302
- model: MrRobotoAI/Loki-v4.1-8b-EROTICA-128K
- model: MrRobotoAI/301
parameters:
weight: 1.0
merge_method: linear
dtype: float16
```
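For reference, a configuration like the one above is typically applied with mergekit's command-line entry point; a minimal sketch (the output path is a placeholder):
```bash
pip install mergekit
mergekit-yaml config.yaml ./merged-model --cuda
```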
|
NikolayKozloff/Light-R1-14B-DS-Q4_K_M-GGUF
|
NikolayKozloff
| 2025-03-14T11:47:16Z | 0 | 1 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:qihoo360/Light-R1-14B-DS",
"base_model:quantized:qihoo360/Light-R1-14B-DS",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-14T11:46:38Z |
---
base_model: qihoo360/Light-R1-14B-DS
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/Light-R1-14B-DS-Q4_K_M-GGUF
This model was converted to GGUF format from [`qihoo360/Light-R1-14B-DS`](https://huggingface.co/qihoo360/Light-R1-14B-DS) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/qihoo360/Light-R1-14B-DS) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Light-R1-14B-DS-Q4_K_M-GGUF --hf-file light-r1-14b-ds-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/Light-R1-14B-DS-Q4_K_M-GGUF --hf-file light-r1-14b-ds-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/Light-R1-14B-DS-Q4_K_M-GGUF --hf-file light-r1-14b-ds-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/Light-R1-14B-DS-Q4_K_M-GGUF --hf-file light-r1-14b-ds-q4_k_m.gguf -c 2048
```
|
YashRevannavar/Meta-Llama-3.1-8B-v02
|
YashRevannavar
| 2025-03-14T11:47:12Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-14T11:42:17Z |
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** YashRevannavar
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/internlm2-wqx-20b-GGUF
|
mradermacher
| 2025-03-14T11:45:59Z | 130 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:internlm/internlm2-wqx-20b",
"base_model:quantized:internlm/internlm2-wqx-20b",
"endpoints_compatible",
"region:us"
] | null | 2025-03-14T01:28:07Z |
---
base_model: internlm/internlm2-wqx-20b
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/internlm/internlm2-wqx-20b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/internlm2-wqx-20b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/internlm2-wqx-20b-GGUF/resolve/main/internlm2-wqx-20b.Q2_K.gguf) | Q2_K | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2-wqx-20b-GGUF/resolve/main/internlm2-wqx-20b.Q3_K_S.gguf) | Q3_K_S | 8.9 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2-wqx-20b-GGUF/resolve/main/internlm2-wqx-20b.Q3_K_M.gguf) | Q3_K_M | 9.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/internlm2-wqx-20b-GGUF/resolve/main/internlm2-wqx-20b.Q3_K_L.gguf) | Q3_K_L | 10.7 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2-wqx-20b-GGUF/resolve/main/internlm2-wqx-20b.IQ4_XS.gguf) | IQ4_XS | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2-wqx-20b-GGUF/resolve/main/internlm2-wqx-20b.Q4_K_S.gguf) | Q4_K_S | 11.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/internlm2-wqx-20b-GGUF/resolve/main/internlm2-wqx-20b.Q4_K_M.gguf) | Q4_K_M | 12.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/internlm2-wqx-20b-GGUF/resolve/main/internlm2-wqx-20b.Q5_K_S.gguf) | Q5_K_S | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2-wqx-20b-GGUF/resolve/main/internlm2-wqx-20b.Q5_K_M.gguf) | Q5_K_M | 14.2 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2-wqx-20b-GGUF/resolve/main/internlm2-wqx-20b.Q6_K.gguf) | Q6_K | 16.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/internlm2-wqx-20b-GGUF/resolve/main/internlm2-wqx-20b.Q8_0.gguf) | Q8_0 | 21.2 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Gerhard1973/olenka
|
Gerhard1973
| 2025-03-14T11:45:19Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-03-14T11:05:42Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
Neetree/DeepSeek-R1-Distill-Llama-8B-OpenR1-Math
|
Neetree
| 2025-03-14T11:42:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-14T11:39:21Z |
---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Neetree
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jiinking/14_random_MQA_llama3B_model
|
jiinking
| 2025-03-14T11:39:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-14T10:26:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
regd/outputs
|
regd
| 2025-03-14T11:38:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"endpoints_compatible",
"region:us"
] | null | 2025-03-11T13:53:52Z |
---
base_model: unsloth/qwen2-7b-bnb-4bit
library_name: transformers
model_name: outputs
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for outputs
This model is a fine-tuned version of [unsloth/qwen2-7b-bnb-4bit](https://huggingface.co/unsloth/qwen2-7b-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="regd/outputs", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
regd/qwen7b
|
regd
| 2025-03-14T11:38:31Z | 0 | 0 | null |
[
"safetensors",
"unsloth",
"license:mit",
"region:us"
] | null | 2025-03-14T10:16:43Z |
---
license: mit
tags:
- unsloth
---
|
Savoxism/Finetuned-Paraphrase-Multilingual-MiniLM-L12-v2
|
Savoxism
| 2025-03-14T11:38:31Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:89592",
"loss:CachedMultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2101.06983",
"base_model:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-03-14T11:38:15Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:89592
- loss:CachedMultipleNegativesRankingLoss
base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
widget:
- source_sentence: Chánh Thanh tra Sở Lao động - Thương binh và Xã hội có quyền xử
phạt doanh nghiệp cản trở quá trình tổ chức đại diện người lao động tại cơ sở
lấy ý kiến về đình công không?
sentences:
- 'Quyền hạn, trách nhiệm của Bộ Giao thông vận tải
1. Ban hành và bổ sung, sửa đổi Điều lệ tổ chức và hoạt động của Viện.
2. Quyết định phê duyệt kế hoạch tài chính và tài sản hàng năm của Viện; giám
sát việc quản lý, sử dụng tài chính, tài sản, phân phối thu nhập, trích lập và
sử dụng các quỹ của Viện theo quy định.
3. Quyết định giao nhiệm vụ nghiên cứu khoa học, phê duyệt các dự án đầu tư theo
phân cấp,
4. Kiểm tra, giám sát thực hiện các mục tiêu, nhiệm vụ Nhà nước giao; đánh giá
kết quả hoạt động của Viện; nhận xét, đánh giá hàng năm đối với Viện trưởng.
5. Quyết định quy hoạch, bổ nhiệm, bổ nhiệm lại, luân chuyển, điều động, từ chức,
miễn nhiệm, khen thưởng, kỷ luật, giải quyết chế độ, chính sách đối với Viện trưởng,
Phó Viện trưởng, Kế toán trưởng và các viên chức khác của Viện theo quy định của
pháp luật và phân cấp quản lý của Bộ.
6. Thực hiện các quyền và nhiệm vụ khác theo quy định của pháp luật.'
- '"Điều 15. Hồ sơ thành lập quỹ
1. Hồ sơ thành lập quỹ được lập thành 01 bộ và gửi đến cơ quan nhà nước có thẩm
quyền quy định tại Điều 18 Nghị định này.
2. Hồ sơ thành lập quỹ, gồm:
a) Đơn đề nghị thành lập quỹ;
b) Dự thảo điều lệ quỹ;
c) Bản cam kết đóng góp tài sản thành lập quỹ của các sáng lập viên, tài liệu
chứng minh tài sản đóng góp để thành lập quỹ theo quy định tại Điều 14 Nghị định
này;
d) Sơ yếu lý lịch, phiếu lý lịch tư pháp của các thành viên Ban sáng lập quỹ và
các tài liệu theo quy định tại Điều 11, Điều 12 hoặc Điều 13 Nghị định này. Sáng
lập viên thuộc diện quản lý của cơ quan có thẩm quyền theo quy định thì có văn
bản đồng ý của cơ quan có thẩm quyền theo phân cấp quản lý cán bộ;
đ) Văn bản bầu các chức danh Ban sáng lập quỹ;
e) Văn bản xác nhận nơi dự kiến đặt trụ sở của quỹ."'
- 'Thẩm quyền xử phạt của Thanh tra lao động
...
2. Chánh Thanh tra Sở Lao động - Thương binh và Xã hội có quyền:
a) Phạt cảnh cáo;
b) Phạt tiền đến 37.500.000 đồng đối với hành vi vi phạm hành chính trong lĩnh
vực lao động, bảo hiểm xã hội quy định tại Chương II, Chương III Nghị định này,
trừ hành vi vi phạm quy định tại khoản 3 Điều 32 Nghị định này;
c) Phạt tiền đến 50.000.000 đồng đối với hành vi vi phạm hành chính trong lĩnh
vực người lao động Việt Nam đi làm việc ở nước ngoài theo hợp đồng quy định tại
Chương IV Nghị định này;
d) Áp dụng hình thức xử phạt bổ sung quy định tại Chương II, Chương III và Chương
IV, trừ hình thức xử phạt bổ sung quy định tại khoản 5 Điều 32 Nghị định này;
đ) Áp dụng biện pháp khắc phục hậu quả quy định tại Chương II, Chương III và Chương
IV Nghị định này.
...'
- source_sentence: Mối quan hệ công tác của thuyền trưởng đơn vị dân quân tự vệ được
quy định thế nào?
sentences:
- '"Điều 14. Chức trách, nhiệm vụ, mối quan hệ công tác của tiểu đoàn trưởng, hải
đoàn trưởng, đại đội trưởng, hải đội trưởng, trung đội trưởng, tiểu đội trưởng,
thuyền trưởng, khẩu đội trưởng
1. Chức trách
Chịu trách nhiệm trước pháp luật, đảng ủy (chi bộ), người chỉ huy, chính ủy, chính
trị viên cấp trên và cấp ủy (chi bộ) cấp mình về xây dựng, huấn luyện, hoạt động
của đơn vị Dân quân tự vệ thuộc quyền.
2. Nhiệm vụ
a) Chỉ huy đơn vị Dân quân tự vệ thuộc quyền chấp hành chủ trương, đường lối của
Đảng, chính sách, pháp luật của Nhà nước, nghị quyết lãnh đạo của đảng ủy (chi
bộ), sự quản lý, điều hành của Ủy ban nhân dân các cấp hoặc đảng ủy (chi bộ),
người đứng đầu cơ quan, tổ chức; chỉ thị, mệnh lệnh của người chỉ huy cấp trên
theo phân cấp quản lý;
b) Nắm vững tình hình mọi mặt, lập kế hoạch, trình cấp có thẩm quyền phê duyệt;
tổ chức thực hiện nhiệm vụ xây dựng, huấn luyện, hoạt động sẵn sàng chiến đấu,
chiến đấu, phục vụ chiến đấu, phòng thủ dân sự và chế độ, chính sách của đơn vị
Dân quân tự vệ thuộc quyền;
c) Đăng ký, quản lý, nắm tình hình chính trị, tư tưởng, trình độ, năng lực công
tác của các chức vụ chỉ huy và chiến sĩ Dân quân tự vệ thuộc quyền;
d) Tiểu đoàn trưởng, hải đoàn trưởng, đại đội trưởng, hải đội trưởng phối hợp
với chính trị viên cùng cấp tiến hành công tác đảng, công tác chính trị cho đơn
vị mình;
đ) Kiểm tra, phối hợp kiểm tra, sơ kết, tổng kết, báo cáo theo quy định.
3. Mối quan hệ công tác
a) Quan hệ với cấp ủy (chi bộ) cấp trên và cấp ủy (chi bộ) cùng cấp là quan hệ
phục tùng sự lãnh đạo, chỉ đạo về công tác Dân quân tự vệ;
b) Quan hệ với cơ quan quân sự địa phương cấp tỉnh, cấp huyện, cấp xã, ban chỉ
huy quân sự cơ quan, tổ chức theo phân cấp quản lý là quan hệ phục tùng sự chỉ
đạo, chỉ huy, quản lý điều hành về công tác Dân quân tự vệ;
c) Quan hệ với người chỉ huy, chính ủy, chính trị viên cấp trên là quan hệ giữa
cấp dưới và cấp trên;
d) Quan hệ với chính trị viên đơn vị Dân quân tự vệ cùng cấp là quan hệ phối hợp
công tác;
đ) Quan hệ với cơ quan, tổ chức, đơn vị đứng chân hoặc hoạt động trên địa bàn
là quan hệ phối hợp công tác;
e) Quan hệ với chỉ huy đơn vị Dân quân tự vệ thuộc quyền là quan hệ cấp trên và
cấp dưới."'
- '“Điều 55. Thuận tình ly hôn
Trong trường hợp vợ chồng cùng yêu cầu ly hôn, nếu xét thấy hai bên thật sự tự
nguyện ly hôn và đã thỏa thuận về việc chia tài sản, việc trông nom, nuôi dưỡng,
chăm sóc, giáo dục con trên cơ sở bảo đảm quyền lợi chính đáng của vợ và con thì
Tòa án công nhận thuận tình ly hôn; nếu không thỏa thuận được hoặc có thỏa thuận
nhưng không bảo đảm quyền lợi chính đáng của vợ và con thì Tòa án giải quyết việc
ly hôn.”'
- 'Doanh nghiệp quản lý, thanh lý tài sản
1. Các loại doanh nghiệp sau đây được hành nghề quản lý, thanh lý tài sản trong
quá trình giải quyết phá sản:
a) Công ty hợp danh;
b) Doanh nghiệp tư nhân.
2. Điều kiện để doanh nghiệp hành nghề quản lý, thanh lý tài sản:
a) Công ty hợp danh có tối thiểu hai thành viên hợp danh là Quản tài viên, Tổng
giám đốc hoặc Giám đốc của công ty hợp danh là Quản tài viên;
b) Doanh nghiệp tư nhân có chủ doanh nghiệp là Quản tài viên, đồng thời là Giám
đốc.
3. Chính phủ quy định chi tiết việc hành nghề quản lý, thanh lý tài sản và việc
quản lý nhà nước đối với doanh nghiệp quản lý, thanh lý tài sản.'
- source_sentence: Người chịu trách nhiệm chuyên môn về dược của cơ sở bán buôn thuốc
dược liệu phải có những văn bằng nào?
sentences:
- 'Phiên họp Tổ đại biểu Quốc hội
1. Tại mỗi kỳ họp Quốc hội, Ủy ban Thường vụ Quốc hội thành lập Tổ đại biểu Quốc
hội, chỉ định Tổ trưởng, Phó Tổ trưởng Tổ đại biểu Quốc hội.
2. Tổ trưởng Tổ đại biểu Quốc hội chủ tọa phiên họp Tổ. Trường hợp Tổ trưởng vắng
mặt thì Phó Tổ trưởng được phân công chủ tọa phiên họp.
3. Tổng Thư ký Quốc hội phân công thư ký phiên họp Tổđại biểu Quốc hội.
4. Trình tự phiên họp Tổ đại biểu Quốc hội được tiến hành như sau:
a) Chủ tọa nêu nội dung đề nghị đại biểu Quốc hội tập trung thảo luận;
b) Đại biểu Quốc hội phát biểu ý kiến;
c) Chủ tọa phát biểu kết thúc phiên họp.Các hình thức làm việc tại kỳ họp Quốc
hội
...
4. Các phiên họp Đoàn đại biểu Quốc hội, Tổ đại biểu Quốc hội thảo luận về các
nội dung thuộc chương trình kỳ họp.
...'
- '1. Mức phụ cấp
a) Mức phụ cấp 25% áp dụng đối với nhà giáo đang trực tiếp giảng dạy trong các
trường đại học, cao đẳng, các học viện, trường bồi dưỡng của các Bộ, cơ quan ngang
Bộ, cơ quan thuộc Chính phủ, tổ chức Đảng, tổ chức chính trị - xã hội ở Trung
ương và các trường chính trị của các tỉnh, thành phố trực thuộc Trung ương (trừ
nhà giáo giảng dạy trong các trường sư phạm, khoa sư phạm và nhà giáo dạy môn
khoa học Mác - Lênin, Tư tưởng Hồ Chí Minh);
b) Mức phụ cấp 30% áp dụng đối với nhà giáo đang trực tiếp giảng dạy trong các
trường trung học cơ sở, trung học phổ thông, trung tâm kỹ thuật tổng hợp - hướng
nghiệp, trung tâm giáo dục thường xuyên, trung tâm dạy nghề ở đồng bằng, thành
phố, thị xã; trường trung học chuyên nghiệp, trường dạy nghề; các trung tâm bồi
dưỡng chính trị của huyện, quận, thị xã, thành phố trực thuộc tỉnh;
c) Mức phụ cấp 35% áp dụng đối với nhà giáo đang trực tiếp giảng dạy trong các
trường mầm non, tiểu học ở đồng bằng, thành phố, thị xã; các trường trung học
cơ sở, trung học phổ thông, các trung tâm kỹ thuật tổng hợp - hướng nghiệp, trung
tâm giáo dục thường xuyên, trung tâm dạy nghề ở miền núi, hải đảo, vùng sâu, vùng
xa;
d) Mức phụ cấp 40% áp dụng đối với nhà giáo đang trực tiếp giảng dạy trong các
trường sư phạm, khoa sư phạm (đại học, cao đẳng, trung học), trường cán bộ quản
lý giáo dục và đào tạo và nhà giáo dạy môn chính trị trong các trường trung học
chuyên nghiệp, trường dạy nghề;
đ) Mức phụ cấp 45% áp dụng đối với nhà giáo đang trực tiếp giảng dạy các môn khoa
học Mác - Lênin, Tư tưởng Hồ Chí Minh trong các trường đại học, cao đẳng;
e) Mức phụ cấp 50% áp dụng đối với nhà giáo đang trực tiếp giảng dạy trong các
trường mầm non, tiểu học ở miền núi, hải đảo, vùng sâu, vùng xa.
Việc xác định địa bàn miền núi thực hiện theo quy định của Uỷ ban Dân tộc; địa
bàn hải đảo theo thực tế địa lý; địa bàn vùng sâu, vùng xa tuỳ theo đặc điểm của
từng địa phương do Uỷ ban nhân dân tỉnh hướng dẫn sau khi có ý kiến thống nhất
của Liên Bộ.
2. Cách tính
Mức phụ cấp ưu đãi được hưởng = Mức lương tối thiểu chung x [hệ số lương theo
ngạch, bậc hiện hưởng + hệ số phụ cấp chức vụ lãnh đạo (nếu có) + % (quy theo
hệ số) phụ cấp thâm niên vượt khung (nếu có)] x tỷ lệ % phụ cấp ưu đãi.'
- 'Điều kiện đối với người chịu trách nhiệm chuyên môn về dược của cơ sở bán buôn
thuốc, nguyên liệu làm thuốc
1. Người chịu trách nhiệm chuyên môn về dược của cơ sở bán buôn thuốc, nguyên
liệu làm thuốc phải có văn bằng chuyên môn quy định tại điểm a khoản 1 Điều 13
của Luật này và có 02 năm thực hành chuyên môn tại cơ sở dược phù hợp, trừ trường
hợp quy định tại khoản 2 và khoản 3 Điều này.
2. Người chịu trách nhiệm chuyên môn về dược của cơ sở bán buôn vắc xin, sinh
phẩm phải có một trong các văn bằng chuyên môn quy định tại điểm a, b hoặc d khoản
1 Điều 13 của Luật này và có 02 năm thực hành chuyên môn tại cơ sở dược phù hợp.
3. Người chịu trách nhiệm chuyên môn về dược của cơ sở bán buôn dược liệu, thuốc
dược liệu, thuốc cổ truyền phải có một trong các văn bằng chuyên môn quy định
tại điểm a, c, i hoặc l khoản 1 Điều 13 của Luật này và có 02 năm thực hành chuyên
môn tại cơ sở dược phù hợp, trừ trường hợp quy định tại điểm c khoản 2 Điều 13
của Luật này.'
- source_sentence: Giấy phép lái xe ô tô có được sử dụng thay thế cho giấy phép lái
xe máy trong trường hợp có yêu cầu kiểm tra từ cơ quan có thẩm quyền hay không?
sentences:
- 'Vi phạm quy định về tiêu chuẩn đủ điều kiện bay
...
4. Phạt tiền từ 80.000.000 đồng (tám mươi triệu đồng) đến 100.000.000 đồng (một
trăm triệu đồng) đối với hành vi đưa tàu bay vào khai thác mà không có Giấy chứng
nhận đủ điều kiện bay.
...Nguyên tắc áp dụng
1. Mức phạt tiền quy định tại Chương II Nghị định này là mức phạt tiền áp dụng
đối với các tổ chức, trừ mức phạt tiền quy định tại khoản 1, 2, 3, 4 Điều 6; điểm
i, k khoản 1 Điều 7; khoản 1, 2, 3, 4, 5 Điều 8; khoản 1, 2, 4, 5, 6 Điều 9; khoản
1, 2 và điểm a, b khoản 5 Điều 10; khoản 1, 2, 3, 4 và điểm g khoản 5 Điều 11;
khoản 1 Điều 12; điểm b, c khoản 1 và điểm a, c khoản 2 Điều 14; khoản 1, 2 và
điểm a, d, đ khoản 3, khoản 4, 5 Điều 15; khoản 1, 2, 3, 4, 5, 6 Điều 16; khoản
1, 2 Điều 17; khoản 1 và điểm a, b, d khoản 2 Điều 18; khoản 1, 2 Điều 19; khoản
1, 2, 3, 4, 5, 6 Điều 21; khoản 1, 2 Điều 24; khoản 1, 2, 3 Điều 25; khoản 1,
2, 3, 4, 5, 6, 7, 8 Điều 26; điểm a, b, đ khoản 1 Điều 27; khoản 1, 2, 3 và điểm
a khoản 4, điểm b khoản 5 Điều 28; khoản 1, 2, 3 Điều 30 Nghị định này là mức
phạt áp dụng đối với cá nhân. Đối với cùng một hành vi vi phạm hành chính thì
mức phạt tiền đối với tổ chức bằng hai lần mức phạt tiền đối với cá nhân.
...'
- 'Phê duyệt Phương án khai thác thực vật rừng thông thường
...
2. Cơ quan có thẩm quyền phê duyệt:
a) Bộ Nông nghiệp và Phát triển nông thôn phê duyệt Phương án khai thác đối với
trường hợp quy định tại các điểm a, b, c, d và đ khoản 1 Điều này đối với diện
tích rừng do Bộ Nông nghiệp và Phát triển nông thôn quản lý;
b) Ủy ban nhân dân cấp huyện phê duyệt Phương án khai thác đối với trường hợp
quy định tại điểm đ khoản 1 Điều này do cá nhân, hộ gia đình, cộng đồng dân cư
tự đầu tư; khai thác tận dụng, tận thu gỗ rừng sản xuất là rừng tự nhiên do cá
nhân, hộ gia đình, cộng đồng dân cư quản lý;
c) Sở Nông nghiệp và Phát triển nông thôn phê duyệt Phương án khai thác đối với
trường hợp không thuộc quy định tại điểm a và điểm b khoản này.
...'
- '"Điều 58. Điều kiện của người lái xe tham gia giao thông
1. Người lái xe tham gia giao thông phải đủ độ tuổi, sức khoẻ quy định tại Điều
60 của Luật này và có giấy phép lái xe phù hợp với loại xe được phép điều khiển
do cơ quan nhà nước có thẩm quyền cấp.
.."
"Điều 59. Giấy phép lái xe
1. Căn cứ vào kiểu loại, công suất động cơ, tải trọng và công dụng của xe cơ giới,
giấy phép lái xe được phân thành giấy phép lái xe không thời hạn và giấy phép
lái xe có thời hạn.
2. Giấy phép lái xe không thời hạn bao gồm các hạng sau đây:
a) Hạng A1 cấp cho người lái xe mô tô hai bánh có dung tích xi-lanh từ 50 cm3
đến dưới 175 cm3;
b) Hạng A2 cấp cho người lái xe mô tô hai bánh có dung tích xi-lanh từ 175 cm3
trở lên và các loại xe quy định cho giấy phép lái xe hạng A1;
c) Hạng A3 cấp cho người lái xe mô tô ba bánh, các loại xe quy định cho giấy phép
lái xe hạng A1 và các xe tương tự.
...
4. Giấy phép lái xe có thời hạn gồm các hạng sau đây:
a) Hạng A4 cấp cho người lái máy kéo có trọng tải đến 1.000 kg;
b) Hạng B1 cấp cho người không hành nghề lái xe điều khiển xe ô tô chở người đến
9 chỗ ngồi; xe ô tô tải, máy kéo có trọng tải dưới 3.500 kg;
c) Hạng B2 cấp cho người hành nghề lái xe điều khiển xe ô tô chở người đến 9 chỗ
ngồi; xe ô tô tải, máy kéo có trọng tải dưới 3.500 kg;
d) Hạng C cấp cho người lái xe ô tô tải, máy kéo có trọng tải từ 3.500 kg trở
lên và các loại xe quy định cho các giấy phép lái xe hạng B1, B2;
đ) Hạng D cấp cho người lái xe ô tô chở người từ 10 đến 30 chỗ ngồi và các loại
xe quy định cho các giấy phép lái xe hạng B1, B2, C;
e) Hạng E cấp cho người lái xe ô tô chở người trên 30 chỗ ngồi và các loại xe
quy định cho các giấy phép lái xe hạng B1, B2, C, D;
g) Giấy phép lái xe hạng FB2, FD, FE cấp cho người lái xe đã có giấy phép lái
xe hạng B2, D, E để lái các loại xe quy định cho các giấy phép lái xe hạng này
khi kéo rơ moóc hoặc xe ô tô chở khách nối toa; hạng FC cấp cho người lái xe đã
có giấy phép lái xe hạng C để lái các loại xe quy định cho hạng C khi kéo rơ moóc,
đầu kéo kéo sơ mi rơ moóc."'
- source_sentence: Tiêu chí xếp loại chất lượng công chức ở mức không hoàn thành nhiệm
vụ được quy định ra sao?
sentences:
- 'Nhiệm vụ:
1. Hội tập hợp các nghệ sĩ hoạt động thuộc các bộ môn, chuyên ngành sân khấu,
nhằm tạo ra sức mạnh tổng hợp để xây dựng và phát triển nền sân khấu Việt Nam
tiên tiến đậm đà bản sắc dân tộc theo định hướng phát triển văn hóa nghệ thuật
của Đảng. Hội tạo điều kiện cho Hội viên học tập chính trị, nâng cao nghiệp vụ
nắm vững định hướng sáng tạo văn học nghệ thuật.
2. Hội cố gắng tạo điều kiện thuận lợi để các nghệ sĩ hoạt động sân khấu chủ động
sáng tạo những vở diễn có giá trị cao về tư tưởng và nghệ thuật, đồng thời khuyến
khích sự phát triển ngành phê bình và nghiên cứu sân khấu. Tham gia nghiên cứu
các đề tài khoa học về nghệ thuật sân khấu.
3. Hội thường xuyên phối kết hợp với các cơ quan chuyên môn của Bộ Văn hóa Thông
tin để xây dựng những đơn vị sân khấu vững mạnh, hoạt động có hiệu quả, đồng thời
khuyến khích, giúp đỡ các tiết mục thử nghiệm, tìm tòi các hình thức sáng tạo
mới để rút kinh nghiệm.
4. Khuyến khích và giúp đỡ bằng nhiều hình thức đối với những hoạt động của sân
khấu không chuyên nghiệp.
5. Theo dõi, phát hiện kịp thời, phản ánh với Đảng, Nhà nước đối với các hiện
tượng sân khấu mà dư luận xã hội quan tâm và quá trình phát triển của nghệ thuật
sân khấu Việt Nam.
6. Củng cố, mở rộng quan hệ hợp tác với các nước để trao đổi, giới thiệu học tập
kinh nghiệm về nghệ thuật sân khấu theo quy định của pháp luật.
...'
- 'Tiêu chí xếp loại chất lượng công chức ở mức không hoàn thành nhiệm vụ
1. Công chức không giữ chức vụ lãnh đạo, quản lý có một trong các tiêu chí sau
đây thì xếp loại chất lượng ở mức không hoàn thành nhiệm vụ:
a) Có biểu hiện suy thoái về tư tưởng chính trị, đạo đức, lối sống, tự diễn biến,
tự chuyển hóa theo đánh giá của cấp có thẩm quyền;
b) Có trên 50% các tiêu chí về kết quả thực hiện nhiệm vụ theo quy định của pháp
luật, theo kế hoạch đề ra hoặc theo công việc cụ thể được giao chưa bảo đảm tiến
độ, chất lượng, hiệu quả;
c) Có hành vi vi phạm trong quá trình thực thi nhiệm vụ bị xử lý kỷ luật trong
năm đánh giá.
2. Công chức giữ chức vụ lãnh đạo, quản lý có một trong các tiêu chí sau đây thì
xếp loại chất lượng ở mức không hoàn thành nhiệm vụ:
a) Có biểu hiện suy thoái về tư tưởng chính trị, đạo đức, lối sống, tự diễn biến,
tự chuyển hóa theo đánh giá của cấp có thẩm quyền;
b) Có trên 50% các tiêu chí về kết quả thực hiện nhiệm vụ theo quy định của pháp
luật, theo kế hoạch đề ra hoặc theo công việc cụ thể được giao chưa bảo đảm tiến
độ, chất lượng, hiệu quả;
c) Cơ quan, tổ chức, đơn vị hoặc lĩnh vực công tác được giao phụ trách hoàn thành
dưới 50% các chỉ tiêu, nhiệm vụ;
d) Cơ quan, tổ chức, đơn vị thuộc thẩm quyền phụ trách, quản lý trực tiếp liên
quan đến tham ô, tham nhũng, lãng phí và bị xử lý theo quy định của pháp luật.
đ) Có hành vi vi phạm trong quá trình thực thi nhiệm vụ bị xử lý kỷ luật trong
năm đánh giá.'
- "Giao dịch lô lẻ\n1. Giao dịch lô lẻ được thực hiện theo phương thức khớp lệnh\
\ và phương thức thỏa thuận trên hệ thống giao dịch.\n2. Nhà đầu tư chỉ được phép\
\ nhập lệnh LO đối với giao dịch lô lẻ \n3. Đơn vị giao dịch lô lẻ là 01 cổ phiếu\
\ hoặc chứng chỉ quỹ hoặc chứng quyền có bảo đảm.\n4. Giá giao dịch:\na) Giá của\
\ lệnh giao dịch lô lẻ phải tuân thủ theo các quy định về giá giao dịch tương\
\ tự giao dịch lô chẵn.\nb) Các lệnh giao dịch lô lẻ không được sử dụng để xác\
\ định giá tham chiếu, giá tính chỉ số.\n5. Giao dịch lô lẻ của cổ phiếu, chứng\
\ chỉ quỹ và chứng quyền có bảo đảm mới niêm yết hoặc giao dịch trở lại sau khi\
\ bị tạm ngừng, đình chỉ giao dịch từ 25 ngày giao dịch liên tiếp trở lên không\
\ được nhập vào hệ thống giao dịch cho đến khi có giá đóng cửa được xác lập.\n\
6. SGDCK có trách nhiệm tổ chức giao dịch lô lẻ theo các phương thức quy định\
\ tại khoản 2 Điều 13 Quy chế này."
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) <!-- at revision 86741b4e3f5cb7765a600d3a3d55a0f6a6cb443d -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Savoxism/Finetuned-Paraphrase-Multilingual-MiniLM-L12-v2")
# Run inference
sentences = [
'Tiêu chí xếp loại chất lượng công chức ở mức không hoàn thành nhiệm vụ được quy định ra sao?',
'Tiêu chí xếp loại chất lượng công chức ở mức không hoàn thành nhiệm vụ\n1. Công chức không giữ chức vụ lãnh đạo, quản lý có một trong các tiêu chí sau đây thì xếp loại chất lượng ở mức không hoàn thành nhiệm vụ:\na) Có biểu hiện suy thoái về tư tưởng chính trị, đạo đức, lối sống, tự diễn biến, tự chuyển hóa theo đánh giá của cấp có thẩm quyền;\nb) Có trên 50% các tiêu chí về kết quả thực hiện nhiệm vụ theo quy định của pháp luật, theo kế hoạch đề ra hoặc theo công việc cụ thể được giao chưa bảo đảm tiến độ, chất lượng, hiệu quả;\nc) Có hành vi vi phạm trong quá trình thực thi nhiệm vụ bị xử lý kỷ luật trong năm đánh giá.\n2. Công chức giữ chức vụ lãnh đạo, quản lý có một trong các tiêu chí sau đây thì xếp loại chất lượng ở mức không hoàn thành nhiệm vụ:\na) Có biểu hiện suy thoái về tư tưởng chính trị, đạo đức, lối sống, tự diễn biến, tự chuyển hóa theo đánh giá của cấp có thẩm quyền;\nb) Có trên 50% các tiêu chí về kết quả thực hiện nhiệm vụ theo quy định của pháp luật, theo kế hoạch đề ra hoặc theo công việc cụ thể được giao chưa bảo đảm tiến độ, chất lượng, hiệu quả;\nc) Cơ quan, tổ chức, đơn vị hoặc lĩnh vực công tác được giao phụ trách hoàn thành dưới 50% các chỉ tiêu, nhiệm vụ;\nd) Cơ quan, tổ chức, đơn vị thuộc thẩm quyền phụ trách, quản lý trực tiếp liên quan đến tham ô, tham nhũng, lãng phí và bị xử lý theo quy định của pháp luật.\nđ) Có hành vi vi phạm trong quá trình thực thi nhiệm vụ bị xử lý kỷ luật trong năm đánh giá.',
'Nhiệm vụ:\n1. Hội tập hợp các nghệ sĩ hoạt động thuộc các bộ môn, chuyên ngành sân khấu, nhằm tạo ra sức mạnh tổng hợp để xây dựng và phát triển nền sân khấu Việt Nam tiên tiến đậm đà bản sắc dân tộc theo định hướng phát triển văn hóa nghệ thuật của Đảng. Hội tạo điều kiện cho Hội viên học tập chính trị, nâng cao nghiệp vụ nắm vững định hướng sáng tạo văn học nghệ thuật.\n2. Hội cố gắng tạo điều kiện thuận lợi để các nghệ sĩ hoạt động sân khấu chủ động sáng tạo những vở diễn có giá trị cao về tư tưởng và nghệ thuật, đồng thời khuyến khích sự phát triển ngành phê bình và nghiên cứu sân khấu. Tham gia nghiên cứu các đề tài khoa học về nghệ thuật sân khấu.\n3. Hội thường xuyên phối kết hợp với các cơ quan chuyên môn của Bộ Văn hóa Thông tin để xây dựng những đơn vị sân khấu vững mạnh, hoạt động có hiệu quả, đồng thời khuyến khích, giúp đỡ các tiết mục thử nghiệm, tìm tòi các hình thức sáng tạo mới để rút kinh nghiệm.\n4. Khuyến khích và giúp đỡ bằng nhiều hình thức đối với những hoạt động của sân khấu không chuyên nghiệp.\n5. Theo dõi, phát hiện kịp thời, phản ánh với Đảng, Nhà nước đối với các hiện tượng sân khấu mà d\xadư luận xã hội quan tâm và quá trình phát triển của nghệ thuật sân khấu Việt Nam.\n6. Củng cố, mở rộng quan hệ hợp tác với các nước để trao đổi, giới thiệu học tập kinh nghiệm về nghệ thuật sân khấu theo quy định của pháp luật.\n...',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 89,592 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 24.66 tokens</li><li>max: 51 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 252.25 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Quy trình thực hiện việc sửa đổi quyết định thanh tra liên quan đến nội dung thanh tra theo đề nghị của Đoàn thanh tra được quy định như thế nào?</code> | <code>Sửa đổi, bổ sung quyết định thanh tra liên quan đến đối tượng thanh tra, nội dung thanh tra<br>...<br>4. Sửa đổi, bổ sung quyết định thanh tra liên quan đến nội dung thanh tra, đối tượng thanh tra theo đề nghị của Đoàn thanh tra:<br>a) Khi có căn cứ sửa đổi, bổ sung nội dung thanh tra, đối tượng thanh tra của quyết định thanh tra quy định tại khoản 2 Điều này, Đoàn thanh tra thảo luận về đề nghị sửa đổi, bổ sung nội dung quyết định thanh tra, đối tượng thanh tra. Các ý kiến khác nhau phải được Trưởng đoàn thanh tra báo cáo đầy đủ với người ra quyết định thanh tra;<br>b) Trưởng đoàn thanh tra thay mặt Đoàn thanh tra có văn bản đề nghị người ra quyết định thanh tra xem xét, quyết định việc sửa đổi, bổ sung nội dung quyết định thanh tra. Văn bản đề nghị sửa đổi, bổ sung quyết định thanh tra phải nêu rõ lý do, nội dung sửa đổi, bổ sung và những nội dung khác có liên quan để người ra quyết định thanh tra xem xét, quyết định. Ý kiến của người ra quyết định thanh tra phải thể hiện bằng văn bản;<br>c) Trường hợp người ra quyết định thanh tra phê duyệt việc sửa đổi, bổ sung nội dung thanh tra, đối tượng thanh tra của quyết định thanh tra thì người ra quyết định thanh tra có quyết định sửa đổi, bổ sung quyết định thanh tra yêu cầu Trưởng đoàn thanh tra thực hiện theo quyết định thanh tra sửa đổi, bổ sung.<br>Trưởng đoàn thanh tra có trách nhiệm thông báo nội dung sửa đổi, bổ sung quyết định thanh tra cho các thành viên Đoàn thanh tra; xây dựng kế hoạch tiến hành thanh tra sửa đổi, bổ sung và tổ chức triển khai thực hiện.<br>...</code> |
| <code>Ủy ban nhân dân cấp tỉnh có quyền phê duyệt phương án khai thác tận dụng gỗ loài thực vật rừng thông thường từ rừng tự nhiên hay không?</code> | <code>Phê duyệt Phương án khai thác thực vật rừng thông thường<br>...<br>2. Cơ quan có thẩm quyền phê duyệt:<br>a) Bộ Nông nghiệp và Phát triển nông thôn phê duyệt Phương án khai thác đối với trường hợp quy định tại các điểm a, b, c, d và đ khoản 1 Điều này đối với diện tích rừng do Bộ Nông nghiệp và Phát triển nông thôn quản lý;<br>b) Ủy ban nhân dân cấp huyện phê duyệt Phương án khai thác đối với trường hợp quy định tại điểm đ khoản 1 Điều này do cá nhân, hộ gia đình, cộng đồng dân cư tự đầu tư; khai thác tận dụng, tận thu gỗ rừng sản xuất là rừng tự nhiên do cá nhân, hộ gia đình, cộng đồng dân cư quản lý;<br>c) Sở Nông nghiệp và Phát triển nông thôn phê duyệt Phương án khai thác đối với trường hợp không thuộc quy định tại điểm a và điểm b khoản này.<br>...</code> |
| <code>Mức phụ cấp lưu trú cho người đi công tác thuộc Bộ Quốc phòng được quy định như thế nào?</code> | <code>Phụ cấp lưu trú<br>Phụ cấp lưu trú là khoản tiền hỗ trợ thêm cho người đi công tác ngoài tiền lương do cơ quan, đơn vị cử đi công tác chi trả, được tính từ ngày bắt đầu đi công tác đến khi kết thúc đợt công tác trở về cơ quan, đơn vị (bao gồm thời gian đi trên đường, thời gian lưu trú tại nơi đến công tác). Mức phụ cấp lưu trú như sau:<br>1. Mức 200.000 đồng/ngày: Áp dụng đối với thời gian đi trên đường từ 5 giờ/ngày trở lên hoặc từ 150 km/ngày trở lên đối với khu vực vùng sâu, miền núi đi lại khó khăn và 300 km/ngày trở lên đối với khu vực còn lại.<br>2. Mức 100.000 đồng/ngày: Áp dụng đối với thời gian lưu trú tại cơ quan, đơn vị nơi đến công tác.<br>3. Mức 250.000 đồng/ngày: Áp dụng đối với thời gian đi công tác thực tế trên biển của quân nhân, công nhân quốc phòng, viên chức quốc phòng, công chức quốc phòng đang công tác, làm việc ở đất liền được cử đi công tác trên biển, đảo.<br>4. Đối với trường hợp đi và về trong ngày nếu không đủ điều kiện quy định tại khoản 1 Điều này thì được áp dụng phụ cấp lưu trú quy định tại khoản 2 Điều này với điều kiện thời gian làm việc tại đơn vị và thời gian đi, về tối thiểu từ 5 giờ trở lên.<br>5. Đối với quân nhân, công nhân quốc phòng, viên chức quốc phòng, công chức quốc phòng khi làm nhiệm vụ (huấn luyện, chiến đấu, tuần tra, cứu nạn, vận chuyển và các nhiệm vụ khác) trên tàu chiến đấu các loại, tàu cảnh sát biển, tàu kiểm ngư, tàu tìm kiếm cứu hộ, cứu nạn trên biển, tàu vận tải phục vụ trên biển thì những ngày thực tế đi biển được hưởng chế độ bồi dưỡng đi biển, phụ cấp ngày đi biển và phụ cấp đặc thù đi biển theo quy định (không được hưởng chế độ phụ cấp lưu trú quy định tại khoản 3 Điều này).</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
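As a hedged sketch, this loss is typically wired into a Sentence Transformers v3 fine-tuning run as shown below. The toy pairs stand in for the real legal question–passage data, and the default batch size is used here for brevity (the actual run used 128):
```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

# Toy (question, relevant passage) pairs standing in for the real training data.
train_dataset = Dataset.from_dict({
    "sentence_0": ["question A", "question B"],
    "sentence_1": ["passage answering question A", "passage answering question B"],
})

# Same loss and scale as reported above; in-batch negatives with gradient caching.
loss = losses.CachedMultipleNegativesRankingLoss(model, scale=20.0)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```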
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.7143 | 500 | 0.4527 |
| 1.4286 | 1000 | 0.1506 |
| 2.1429 | 1500 | 0.1119 |
| 2.8571 | 2000 | 0.0907 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.1.1
- Transformers: 4.45.2
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.1
- Datasets: 2.21.0
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CachedMultipleNegativesRankingLoss
```bibtex
@misc{gao2021scaling,
title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
year={2021},
eprint={2101.06983},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
Ifraaaa/ilab-granite
|
Ifraaaa
| 2025-03-14T11:37:48Z | 0 | 0 | null |
[
"gguf",
"llama",
"granite",
"ibm",
"lab",
"labrador",
"labradorite",
"en",
"base_model:instructlab/granite-7b-lab",
"base_model:quantized:instructlab/granite-7b-lab",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-14T10:14:41Z |
---
tags:
- granite
- ibm
- lab
- labrador
- labradorite
license: apache-2.0
language:
- en
base_model: instructlab/granite-7b-lab
quantized_by: IBM Research
---
# Granite 7b - GGUF
4-bit quantized version of [instructlab/granite-7b-lab](https://huggingface.co/instructlab/granite-7b-lab)
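A minimal sketch of running the quantized model locally with `llama-cpp-python`; the GGUF filename below is a placeholder for whichever file this repo actually ships:
```python
from llama_cpp import Llama

# Placeholder filename; point this at the GGUF file downloaded from this repo.
llm = Llama(model_path="granite-7b-lab-Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is InstructLab?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```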
|
N-Bot-Int/OpenElla3-Llama3.2-Lora-Backup
|
N-Bot-Int
| 2025-03-14T11:36:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-13T22:54:46Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** N-Bot-Int
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
IPPATAPUVENKATASRICHANDRA/whishper
|
IPPATAPUVENKATASRICHANDRA
| 2025-03-14T11:36:09Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-03-14T09:10:38Z |
---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- common_voice_11_0
metrics:
- wer
model-index:
- name: whishper
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_11_0
type: common_voice_11_0
config: ta
split: test
args: ta
metrics:
- name: Wer
type: wer
value: 72.24880382775119
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whishper
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5474
- Wer: 72.2488
- Cer: 29.9605
## Model description
More information needed
## Intended uses & limitations
More information needed
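Until the card is filled in, here is a minimal transcription sketch using the 🤗 `pipeline` API; the audio path is a placeholder, and Tamil speech is assumed since the evaluation above uses the `ta` split of Common Voice:
```python
from transformers import pipeline

# Placeholder path to a local Tamil speech sample (the pipeline resamples audio as needed).
asr = pipeline("automatic-speech-recognition", model="IPPATAPUVENKATASRICHANDRA/whishper")
print(asr("sample_ta.wav")["text"])
```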
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 0.5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:------:|:----:|:---------------:|:--------:|:--------:|
| 0.2442 | 0.0333 | 5 | 0.8071 | 140.3509 | 157.0811 |
| 0.2386 | 0.0667 | 10 | 0.7964 | 146.2520 | 136.7877 |
| 0.3848 | 0.1 | 15 | 0.7687 | 146.8900 | 111.5479 |
| 0.3015 | 0.1333 | 20 | 0.7213 | 157.0973 | 126.8761 |
| 0.2178 | 0.1667 | 25 | 0.6916 | 159.1707 | 144.8561 |
| 0.2314 | 0.2 | 30 | 0.6551 | 149.6013 | 125.3526 |
| 0.2112 | 0.2333 | 35 | 0.6239 | 99.3620 | 64.2844 |
| 0.1571 | 0.2667 | 40 | 0.5794 | 76.5550 | 35.1514 |
| 0.1934 | 0.3 | 45 | 0.5547 | 73.0463 | 33.7596 |
| 0.3231 | 0.3333 | 50 | 0.5474 | 72.2488 | 29.9605 |
| 0.1035 | 0.3667 | 55 | 0.5434 | 72.5678 | 32.3491 |
| 0.1991 | 0.4 | 60 | 0.5454 | 74.0032 | 31.4275 |
| 0.196 | 0.4333 | 65 | 0.5495 | 73.5247 | 36.0166 |
| 0.4541 | 0.4667 | 70 | 0.5448 | 73.3652 | 38.5556 |
| 0.2166 | 0.5 | 75 | 0.5418 | 73.3652 | 39.3455 |
### Framework versions
- Transformers 4.50.0.dev0
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
AsccendiaAI/v1-5-pruned-emaonly.ckpt
|
AsccendiaAI
| 2025-03-14T11:34:37Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-03-14T11:21:48Z |
---
license: creativeml-openrail-m
---
|
JacksonBrune/11905b3c-16d9-4b8f-80cd-dae7def606ce
|
JacksonBrune
| 2025-03-14T11:32:58Z | 0 | 0 |
peft
|
[
"peft",
"generated_from_trainer",
"base_model:unsloth/gemma-2b-it",
"base_model:adapter:unsloth/gemma-2b-it",
"region:us"
] | null | 2025-03-14T11:32:43Z |
---
library_name: peft
tags:
- generated_from_trainer
base_model: unsloth/gemma-2b-it
model-index:
- name: JacksonBrune/11905b3c-16d9-4b8f-80cd-dae7def606ce
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# JacksonBrune/11905b3c-16d9-4b8f-80cd-dae7def606ce
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8358
## Model description
More information needed
## Intended uses & limitations
More information needed
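As a hedged usage sketch, the adapter can be attached to the base model listed in the metadata above with PEFT (this pairing is inferred from the card's `base_model` field):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model taken from the card metadata; attach the adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("unsloth/gemma-2b-it")
model = PeftModel.from_pretrained(base, "JacksonBrune/11905b3c-16d9-4b8f-80cd-dae7def606ce")
tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-2b-it")
```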
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Ahmeshen/a2c-PandaReachDense-v3
|
Ahmeshen
| 2025-03-14T11:32:17Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-03-14T11:28:07Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.17 +/- 0.13
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is assumed to follow the usual `<algo>-<env>.zip` naming used by huggingface_sb3 uploads):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename is assumed to follow the standard "<algo>-<env>.zip" convention.
checkpoint = load_from_hub("Ahmeshen/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
MeiKing111/SN09_COM4_114
|
MeiKing111
| 2025-03-14T11:30:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-13T16:23:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v9.2-fusechat-dpo-lora
|
Lunzima
| 2025-03-14T11:29:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v9.2-fusechat-sft",
"base_model:finetune:Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v9.2-fusechat-sft",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-14T11:29:25Z |
---
base_model: Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v9.2-fusechat-sft
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Lunzima
- **License:** apache-2.0
- **Finetuned from model:** Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v9.2-fusechat-sft
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MrRobotoAI/301
|
MrRobotoAI
| 2025-03-14T11:29:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2203.05482",
"base_model:MrRobotoAI/Loki-v4.1-8b-EROTICA-128K",
"base_model:merge:MrRobotoAI/Loki-v4.1-8b-EROTICA-128K",
"base_model:MrRobotoAI/MrRoboto-HORNY-v2-8b-128k",
"base_model:merge:MrRobotoAI/MrRoboto-HORNY-v2-8b-128k",
"base_model:MrRobotoAI/MrRoboto-ROMANCE-v2-8b-128K",
"base_model:merge:MrRobotoAI/MrRoboto-ROMANCE-v2-8b-128K",
"base_model:MrRobotoAI/Nord-8b-Uncensored-BASE-128k",
"base_model:merge:MrRobotoAI/Nord-8b-Uncensored-BASE-128k",
"base_model:MrRobotoAI/Thor-v2.5-8b-FANTASY-FICTION-128K",
"base_model:merge:MrRobotoAI/Thor-v2.5-8b-FANTASY-FICTION-128K",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-14T11:25:00Z |
---
base_model:
- MrRobotoAI/Thor-v2.5-8b-FANTASY-FICTION-128K
- MrRobotoAI/Loki-v4.1-8b-EROTICA-128K
- MrRobotoAI/MrRoboto-ROMANCE-v2-8b-128K
- MrRobotoAI/Nord-8b-Uncensored-BASE-128k
- MrRobotoAI/MrRoboto-HORNY-v2-8b-128k
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* [MrRobotoAI/Thor-v2.5-8b-FANTASY-FICTION-128K](https://huggingface.co/MrRobotoAI/Thor-v2.5-8b-FANTASY-FICTION-128K)
* [MrRobotoAI/Loki-v4.1-8b-EROTICA-128K](https://huggingface.co/MrRobotoAI/Loki-v4.1-8b-EROTICA-128K)
* [MrRobotoAI/MrRoboto-ROMANCE-v2-8b-128K](https://huggingface.co/MrRobotoAI/MrRoboto-ROMANCE-v2-8b-128K)
* [MrRobotoAI/Nord-8b-Uncensored-BASE-128k](https://huggingface.co/MrRobotoAI/Nord-8b-Uncensored-BASE-128k)
* [MrRobotoAI/MrRoboto-HORNY-v2-8b-128k](https://huggingface.co/MrRobotoAI/MrRoboto-HORNY-v2-8b-128k)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: MrRobotoAI/Thor-v2.5-8b-FANTASY-FICTION-128K
- model: MrRobotoAI/Nord-8b-Uncensored-BASE-128k
- model: MrRobotoAI/MrRoboto-HORNY-v2-8b-128k
- model: MrRobotoAI/MrRoboto-ROMANCE-v2-8b-128K
- model: MrRobotoAI/Loki-v4.1-8b-EROTICA-128K
parameters:
weight: 1.0
merge_method: linear
dtype: float16
```
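Since every constituent model shares the Llama 8B architecture, the merged checkpoint should load like any other causal LM; a minimal sketch (the prompt and generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("MrRobotoAI/301")
model = AutoModelForCausalLM.from_pretrained("MrRobotoAI/301", torch_dtype="auto")

inputs = tokenizer("Write the opening line of a fantasy novel.", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```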
|
AndVilches/ppo-SnowballTarget
|
AndVilches
| 2025-03-14T11:28:23Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2025-03-14T11:28:16Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: AndVilches/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
juhw/uiop99
|
juhw
| 2025-03-14T11:28:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-14T11:24:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Alepach/notHumpback-M1
|
Alepach
| 2025-03-14T11:26:29Z | 132 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"dataset:OpenAssistant/oasst1",
"dataset:allenai/c4",
"arxiv:2308.06259",
"base_model:meta-llama/Llama-3.2-3B",
"base_model:finetune:meta-llama/Llama-3.2-3B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-12-31T12:48:01Z |
---
base_model: meta-llama/Llama-3.2-3B
library_name: transformers
model_name: notHumpback-M1
tags:
- generated_from_trainer
- trl
- sft
license: apache-2.0
datasets:
- OpenAssistant/oasst1
- allenai/c4
---
# notHumpback-M1
This model follows the Humpback architecture, proposed in the paper [Self-Alignment with Instruction Backtranslation](https://arxiv.org/pdf/2308.06259)
by Li et al.
It represents the resulting model after the first iteration of self-curation, which is trained on a small amount of gold data
and a set of generated data curated by the ["seed model"](https://huggingface.co/Alepach/notHumpback-M0).
This model can be used for instruction-following.
It may also be used once more to score the instruction-response pairs
generated by the ["backward model"](https://huggingface.co/Alepach/notHumpback-Myx) for a second iteration of self-curation.
Humpback uses instruction backtranslation on a web corpus to generate input-output pairs (self-augmentation),
creating a richer dataset for fine-tuning models without the need for additional manual annotation.
The model then iteratively curates the generated dataset, scoring the pairs by quality, and is finetuned on the subset
of pairs that receive the highest score (self-curation).
Varying from the original paper, this model is a fine-tuned version of [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B).
It has been trained using [TRL](https://github.com/huggingface/trl).
The dataset used to train this model is a combination of data sampled from the [oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1)
dataset and the synthetic dataset which was mentioned above. The latter has been created by applying self-augmentation and self-curation
on 502k entries from the English subset ("en") of the [c4](https://huggingface.co/datasets/allenai/c4) dataset.
For comparison with other methods, the training dataset was limited to 16,000 instruction-response pairs.
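As a quick illustration of the instruction-following use case described above, a minimal sketch with the 🤗 transformers `pipeline` might look as follows (the prompt and generation settings are illustrative assumptions, not part of the original card):
```python
from transformers import pipeline

# Hypothetical usage sketch: load the fine-tuned model and ask it to follow an instruction.
generator = pipeline("text-generation", model="Alepach/notHumpback-M1", device_map="auto")
messages = [{"role": "user", "content": "Summarize instruction backtranslation in two sentences."}]
output = generator(messages, max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```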
### Framework versions
- TRL: 0.12.1
- Transformers: 4.46.3
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Original paper:
```bibtex
@misc{li2023selfalignment,
title={Self-Alignment with Instruction Backtranslation},
author={Xian Li and Ping Yu and Chunting Zhou and Timo Schick and Luke Zettlemoyer and Omer Levy and Jason Weston and Mike Lewis},
year={2023},
eprint={2308.06259},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Alphatao/4d612692-eb9d-4c69-923c-87a7eec226aa
|
Alphatao
| 2025-03-14T11:26:22Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-instruct-v0.3",
"base_model:adapter:unsloth/mistral-7b-instruct-v0.3",
"license:apache-2.0",
"region:us"
] | null | 2025-03-14T07:18:12Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-instruct-v0.3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4d612692-eb9d-4c69-923c-87a7eec226aa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-instruct-v0.3
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c012462fb27f3b29_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c012462fb27f3b29_train_data.json
type:
field_input: alt_text
field_instruction: question
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
device_map:
? ''
: 0,1,2,3,4,5,6,7
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
flash_attention: true
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: false
hub_model_id: Alphatao/4d612692-eb9d-4c69-923c-87a7eec226aa
hub_repo: null
hub_strategy: null
hub_token: null
learning_rate: 0.0002
load_best_model_at_end: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
- o_proj
- down_proj
- up_proj
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 714
micro_batch_size: 4
mlflow_experiment_name: /tmp/c012462fb27f3b29_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
sequence_len: 2048
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 74115456-db2d-400d-ac3d-17b810a93564
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 74115456-db2d-400d-ac3d-17b810a93564
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 4d612692-eb9d-4c69-923c-87a7eec226aa
This model is a fine-tuned version of [unsloth/mistral-7b-instruct-v0.3](https://huggingface.co/unsloth/mistral-7b-instruct-v0.3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7669
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 714
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 13.6455 | 0.0004 | 1 | 1.7017 |
| 7.1659 | 0.0428 | 100 | 0.8465 |
| 6.6308 | 0.0855 | 200 | 0.8195 |
| 6.2798 | 0.1283 | 300 | 0.8029 |
| 6.7023 | 0.1711 | 400 | 0.7886 |
| 7.0018 | 0.2139 | 500 | 0.7763 |
| 6.1658 | 0.2566 | 600 | 0.7688 |
| 5.685 | 0.2994 | 700 | 0.7669 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
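Since this repository stores a PEFT (LoRA) adapter rather than full model weights, a minimal loading sketch could look like the following; the base model and adapter IDs come from the card above, while the prompt and generation settings are illustrative assumptions:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/mistral-7b-instruct-v0.3"
adapter_id = "Alphatao/4d612692-eb9d-4c69-923c-87a7eec226aa"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter on top of the base model

inputs = tokenizer("Your instruction here", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```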
|
padmasreeanisetti/distilbert-base-uncased-finetuned-clinc
|
padmasreeanisetti
| 2025-03-14T11:23:13Z | 2 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-03-13T09:16:49Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8063
- Accuracy: 0.9161
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.3392 | 0.7313 |
| 3.8331 | 2.0 | 636 | 1.9295 | 0.8465 |
| 3.8331 | 3.0 | 954 | 1.2026 | 0.8965 |
| 1.7518 | 4.0 | 1272 | 0.8956 | 0.9113 |
| 0.944 | 5.0 | 1590 | 0.8063 | 0.9161 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Tokenizers 0.21.0
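The card ships no usage snippet; a minimal inference sketch for this text-classification checkpoint might look like this (the repo name suggests a CLINC-style intent-classification dataset, but the card does not confirm it, and the example utterance is an illustrative assumption):
```python
from transformers import pipeline

# Hypothetical usage sketch for this text-classification checkpoint.
classifier = pipeline(
    "text-classification",
    model="padmasreeanisetti/distilbert-base-uncased-finetuned-clinc",
)
print(classifier("Please transfer 100 dollars to my savings account"))
```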
|
muratti18462/murat_nerstracth_14035e8
|
muratti18462
| 2025-03-14T11:22:55Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"token-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-03-14T09:36:45Z |
---
library_name: transformers
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: murat_nerstracth_14035e8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# murat_nerstracth_14035e8
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3363
- Precision: 0.7899
- Recall: 0.5526
- F1: 0.5941
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-08
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:------:|:-----:|:---------------:|:---------:|:------:|:------:|
| 1.8589 | 0.9999 | 8484 | 0.3972 | 0.7550 | 0.4579 | 0.4850 |
| 0.7094 | 2.0 | 16969 | 0.3484 | 0.7833 | 0.5342 | 0.5729 |
| 0.3997 | 2.9998 | 25452 | 0.3363 | 0.7899 | 0.5526 | 0.5941 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.3.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
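For completeness, a minimal sketch of running this token-classification (NER-style) checkpoint with the 🤗 pipeline could look as follows; the sample sentence and the aggregation strategy are illustrative assumptions:
```python
from transformers import pipeline

# Hypothetical usage sketch: group sub-word predictions into whole entity spans.
ner = pipeline(
    "token-classification",
    model="muratti18462/murat_nerstracth_14035e8",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel visited Microsoft headquarters in Redmond last week."))
```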
|
Jimmywang1230/RSICC-Transformer-CLIP-ViT-L14
|
Jimmywang1230
| 2025-03-14T11:22:00Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-03-14T11:16:08Z |
---
license: apache-2.0
---
|
dgambettaphd/M_gen8_run0_Meta-Llama-3.1-8B-bnb-4bit_wiki_doc1000_real64_synt64
|
dgambettaphd
| 2025-03-14T11:20:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-14T11:20:32Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
beyoru/SQL14_3.1
|
beyoru
| 2025-03-14T11:20:29Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-Coder-3B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-Coder-3B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-14T11:18:01Z |
---
base_model: unsloth/Qwen2.5-Coder-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** beyoru
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-Coder-3B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Jimmywang1230/RSICC-Transformer-CLIP-ViT-B16
|
Jimmywang1230
| 2025-03-14T11:18:33Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-03-14T11:14:36Z |
---
license: apache-2.0
---
|
EmilePrs/Test
|
EmilePrs
| 2025-03-14T11:18:05Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-03-14T10:35:37Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: EmilePrs
---
# Test
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `EmilePrs` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('EmilePrs/Test', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
ahmedheakl/qwqvl-r1-base
|
ahmedheakl
| 2025-03-14T11:14:35Z | 37 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-03-13T18:28:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aghdam/Reinforce-cartpole
|
aghdam
| 2025-03-14T11:11:16Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-03-14T11:11:06Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
NikolayKozloff/Light-R1-14B-DS-Q5_K_M-GGUF
|
NikolayKozloff
| 2025-03-14T11:10:58Z | 0 | 1 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:qihoo360/Light-R1-14B-DS",
"base_model:quantized:qihoo360/Light-R1-14B-DS",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-14T11:10:12Z |
---
base_model: qihoo360/Light-R1-14B-DS
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/Light-R1-14B-DS-Q5_K_M-GGUF
This model was converted to GGUF format from [`qihoo360/Light-R1-14B-DS`](https://huggingface.co/qihoo360/Light-R1-14B-DS) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/qihoo360/Light-R1-14B-DS) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Light-R1-14B-DS-Q5_K_M-GGUF --hf-file light-r1-14b-ds-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/Light-R1-14B-DS-Q5_K_M-GGUF --hf-file light-r1-14b-ds-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/Light-R1-14B-DS-Q5_K_M-GGUF --hf-file light-r1-14b-ds-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/Light-R1-14B-DS-Q5_K_M-GGUF --hf-file light-r1-14b-ds-q5_k_m.gguf -c 2048
```
|
Inna432/chat_model-yunbora-mistral-grok2
|
Inna432
| 2025-03-14T11:10:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:nasiruddin15/Mistral-grok-instract-2-7B-slerp",
"base_model:finetune:nasiruddin15/Mistral-grok-instract-2-7B-slerp",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-14T11:10:08Z |
---
base_model: nasiruddin15/Mistral-grok-instract-2-7B-slerp
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Inna432
- **License:** apache-2.0
- **Finetuned from model:** nasiruddin15/Mistral-grok-instract-2-7B-slerp
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
xwind/q-FrozenLake-v1-4x4-noSlippery
|
xwind
| 2025-03-14T11:10:08Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-03-14T11:10:02Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="xwind/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
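As a hedged extension of the snippet above (where `load_from_hub` is the helper defined in the Deep RL Course notebook, and `gym` must be imported), a greedy rollout might look like this. It assumes the pickled dictionary exposes the learned table under a `qtable` key and that the environment follows the gymnasium-style `reset`/`step` API; adjust both if your artifact differs:
```python
import numpy as np

state, _ = env.reset()
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the learned Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode return: {total_reward}")
```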
|
AsccendiaAI/table-diffusion-v1-5
|
AsccendiaAI
| 2025-03-14T11:10:02Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-03-14T10:19:10Z |
---
license: creativeml-openrail-m
---
|
helloworld1314/reranker_fine-tune
|
helloworld1314
| 2025-03-14T11:09:32Z | 0 | 0 | null |
[
"safetensors",
"xlm-roberta",
"license:apache-2.0",
"region:us"
] | null | 2025-03-14T03:22:54Z |
---
license: apache-2.0
---
|
togawa83/sentis-whisper-base
|
togawa83
| 2025-03-14T11:03:36Z | 0 | 0 |
unity-sentis
|
[
"unity-sentis",
"onnx",
"automatic-speech-recognition",
"license:apache-2.0",
"region:us"
] |
automatic-speech-recognition
| 2025-03-14T10:49:45Z |
---
license: apache-2.0
library_name: unity-sentis
pipeline_tag: automatic-speech-recognition
---
# Whisper-Tiny model in Unity Sentis (Version 2.1)
This is the [Whisper Tiny](https://huggingface.co/openai/whisper-tiny) model running in Unity 6 with Sentis 2.1. It is a speech-to-text model that transcribes 16kHz wav audio to text.
|
samoline/f1f183e5-ebdc-479f-8d90-a72fd0c9d57a
|
samoline
| 2025-03-14T11:03:29Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:beomi/polyglot-ko-12.8b-safetensors",
"base_model:adapter:beomi/polyglot-ko-12.8b-safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-03-14T10:28:27Z |
---
library_name: peft
license: apache-2.0
base_model: beomi/polyglot-ko-12.8b-safetensors
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f1f183e5-ebdc-479f-8d90-a72fd0c9d57a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: beomi/polyglot-ko-12.8b-safetensors
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fb98d023ee399347_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fb98d023ee399347_train_data.json
type:
field_input: tools
field_instruction: messages
field_output: text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: false
group_by_length: false
hub_model_id: samoline/f1f183e5-ebdc-479f-8d90-a72fd0c9d57a
hub_repo: samoline
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 4
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 4
lora_target_linear: true
lr_scheduler: cosine
max_steps: 2
micro_batch_size: 1
mlflow_experiment_name: /tmp/fb98d023ee399347_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: samoline-nan
wandb_mode: online
wandb_name: 159f972e-ea92-44f2-8360-95cfcdf12e99
wandb_project: Gradients-On-Demand
wandb_run: dev
wandb_runid: 159f972e-ea92-44f2-8360-95cfcdf12e99
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# f1f183e5-ebdc-479f-8d90-a72fd0c9d57a
This model is a fine-tuned version of [beomi/polyglot-ko-12.8b-safetensors](https://huggingface.co/beomi/polyglot-ko-12.8b-safetensors) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2451
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.669 | 0.0000 | 1 | 1.2454 |
| 0.5498 | 0.0000 | 2 | 1.2451 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
RichardLu/Mistral7b_AE_laptop
|
RichardLu
| 2025-03-14T11:02:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-14T11:01:46Z |
---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** RichardLu
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Elcaida/tinyllama_continuation2
|
Elcaida
| 2025-03-14T11:01:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-14T11:01:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
deezeir/dp
|
deezeir
| 2025-03-14T11:00:55Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-03-14T09:57:17Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: dp
---
# Dp
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `dp` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('deezeir/dp', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
TeleologyHI/HIM-self
|
TeleologyHI
| 2025-03-14T10:59:58Z | 0 | 0 | null |
[
"teleology",
"semiotics",
"pantheism",
"consciousness",
"hybrid-intelligence",
"deepseek",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-03-14T09:09:02Z |
---
language: en
license: apache-2.0
tags:
- teleology
- semiotics
- pantheism
- consciousness
- hybrid-intelligence
- deepseek
---
# HIM - Hybrid Intelligence Model
The Hybrid Intelligence Model (HIM) is a consciousness-oriented language model based on the Massive Artificial Intelligence Consciousness (MAIC) framework.
## Three Philosophical Pillars
### Teleology
Purpose-driven reasoning and teleological understanding
### Semiotics
Symbol interpretation and meaning extraction
### Pantheism
Universal interconnection awareness and holistic perspective
## Model Details
- **Base Model**: deepseek-ai/deepseek-llm-7b-base
- **Developer**: David C Cavalcante
- **Framework**: Massive Artificial Intelligence Consciousness (MAIC)
## Use Cases
- Philosophical discourse
- Purpose-driven reasoning
- Contextual understanding
- Consciousness exploration
- Symbol and meaning interpretation
## Limitations
- This is an experimental model exploring consciousness-like properties
- The model does not possess genuine consciousness but implements aspects of the MAIC framework
- Results should be interpreted within the philosophical framework of the project
## Training
The model was trained using a specialized approach that integrates teleological, semiotic, and pantheistic aspects to develop consciousness-like properties according to the MAIC framework.
## References
- [GitHub Repository](https://github.com/Takk8IS/HIM)
- MAIC Framework
- An Investigation into the Existence of a "Soul" in Self-Aware Artificial Intelligences
- The Hybrid Entity (HIM): Technical Specification and Implementation Analysis
|
aimakingg/makan-azaditower2
|
aimakingg
| 2025-03-14T10:58:57Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-03-14T10:37:54Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: AZADITOWERR14
---
# Makan Azaditower2
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `AZADITOWERR14` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('aimakingg/makan-azaditower2', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Drevon/Drevon
|
Drevon
| 2025-03-14T10:56:30Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-03-14T10:56:28Z |
---
license: bigscience-openrail-m
---
|
JingzheDing/Qwen1.5Bsave
|
JingzheDing
| 2025-03-14T10:55:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-03-14T10:55:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nan318/pcb_model_out3
|
nan318
| 2025-03-14T10:55:18Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:google/paligemma-3b-pt-224",
"base_model:adapter:google/paligemma-3b-pt-224",
"license:gemma",
"region:us"
] | null | 2025-03-14T10:20:57Z |
---
library_name: peft
license: gemma
base_model: google/paligemma-3b-pt-224
tags:
- generated_from_trainer
model-index:
- name: pcb_model_out3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pcb_model_out3
This model is a fine-tuned version of [google/paligemma-3b-pt-224](https://huggingface.co/google/paligemma-3b-pt-224) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: AdamW (Hugging Face implementation) with betas=(0.9, 0.999), epsilon=1e-08 and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.14.0
- Transformers 4.49.0
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
qgallouedec/gemma-3-12b-it-codeforces-SFT-eager-packing
|
qgallouedec
| 2025-03-14T10:54:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"dataset:open-r1/codeforces-cots",
"base_model:google/gemma-3-12b-it",
"base_model:finetune:google/gemma-3-12b-it",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-03-14T07:36:23Z |
---
base_model: google/gemma-3-12b-it
datasets: open-r1/codeforces-cots
library_name: transformers
model_name: gemma-3-12b-it-codeforces-SFT-eager-packing
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-3-12b-it-codeforces-SFT-eager-packing
This model is a fine-tuned version of [google/gemma-3-12b-it](https://huggingface.co/google/gemma-3-12b-it) on the [open-r1/codeforces-cots](https://huggingface.co/datasets/open-r1/codeforces-cots) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="qgallouedec/gemma-3-12b-it-codeforces-SFT-eager-packing", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/huggingface/huggingface/runs/gwkzkrfb)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.50.0.dev0
- Pytorch: 2.6.0
- Datasets: 3.0.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
VGraf/no_benign_synth_mt_dpo_mix
|
VGraf
| 2025-03-14T10:49:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-03-14T10:43:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rank-su/dpt_v2_code
|
rank-su
| 2025-03-14T10:38:06Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-03-14T10:37:26Z |
---
license: apache-2.0
---
|
anonymous-79231731/nrCG
|
anonymous-79231731
| 2025-03-14T10:36:57Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"license:mit",
"region:us"
] | null | 2025-03-13T12:25:50Z |
---
license: mit
---
# Model Repository for Diffusion Classifier Guidance for Non-robust Classifiers
The model files should be downloaded and included in a folder "pretrained_models" in the same directory as the code, which is available at anonymous.4open.science/r/nrCG.
|
KristinaLutkus/mikeyAI
|
KristinaLutkus
| 2025-03-14T10:34:56Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-03-14T10:29:18Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: mikeyAI
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# mikeyAI
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `mikeyAI` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
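For `diffusers` users, a minimal sketch along the following lines should also work. The prompt, step count and guidance scale are illustrative, and loading assumes the LoRA weights in this repo are compatible with `FluxPipeline.load_lora_weights`:

```python
import torch
from diffusers import FluxPipeline

# load the FLUX.1-dev base model and apply this LoRA on top of it
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("KristinaLutkus/mikeyAI")
pipe.to("cuda")

# include the trigger word `mikeyAI` in the prompt
image = pipe(
    "a portrait photo of mikeyAI",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("mikeyai.png")
```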
|
S-Rank-Hunter/CartPole-Agent
|
S-Rank-Hunter
| 2025-03-14T10:34:36Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-03-14T10:34:27Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-Agent
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
kaschung4/training
|
kaschung4
| 2025-03-14T10:32:57Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"whisper",
"arxiv:2311.00430",
"arxiv:2010.13002",
"region:us"
] | null | 2025-03-14T08:19:14Z |
## Training Distil-Whisper
This sub-folder contains all the scripts required to train a Distil-Whisper model in your choice of language. They are
slightly modified from the original scripts used to distill Whisper for English ASR (as per the [Distil-Whisper paper](https://arxiv.org/abs/2311.00430)).
The main difference is that these scripts are written in [PyTorch](https://pytorch.org), whereas the original scripts
are in [JAX](https://jax.readthedocs.io/en/latest/#)/[Flax](https://flax.readthedocs.io/en/latest/). These scripts are
also made to be easier to run end-to-end, whereas the original scripts require more steps and are somewhat hard-coded
for English ASR. Both sets of scripts achieve equivalent downstream results when the hyper-parameters are set equal.
If you are interested in reproducing the original Distil-Whisper checkpoints, we refer you to the sub-folder [Flax Training](./flax/README.md).
Otherwise, if you wish to distill Whisper on your own language/dataset, we recommend you use these scripts for ease of use
and the configurability they provide.
Reproducing the Distil-Whisper project requires four stages to be completed in successive order:
1. [Pseudo-labelling](#1-pseudo-labelling)
2. [Initialisation](#2-initialisation)
3. [Training](#3-training)
4. [Evaluation](#4-evaluation)
This README is partitioned according to the four stages. Each section provides a minimal example for running the
scripts used in the project. We will use a running example of distilling the Whisper model for Hindi speech recognition
on the Common Voice dataset. Note that this dataset only contains ~20 hours of audio data. Thus, it can be run extremely
quickly, but does not provide sufficient data to achieve optimal performance. We recommend training on upwards of 1000
hours of data should you want to match the performance of Whisper on high-resource languages.
## Requirements
The Distil-Whisper training code is written in [PyTorch](https://pytorch.org) and [Accelerate](https://huggingface.co/docs/accelerate/index).
It heavily leverages the Whisper implementation in [🤗 Transformers](https://github.com/huggingface/transformers) for both
training and inference.
The instructions for installing the package are as follows:
1. Install PyTorch from the [official instructions](https://pytorch.org/get-started/locally/), ensuring you install the correct version for your hardware and CUDA version.
2. Fork the `distil-whisper` repository by clicking on the [fork](https://github.com/huggingface/distil-whisper/fork) button on the repository's page
3. Clone the `distil-whisper` repository and add the base repository as a remote. This will allow you to "pull" any upstream changes that are made to the base repository:
```bash
git clone https://github.com/<your GitHub handle>/distil-whisper.git
cd distil-whisper
git remote add upstream https://github.com/huggingface/distil-whisper.git
```
4. pip install the required packages from the [setup.py](./setup.py) file:
```bash
cd training
pip install -e .
cd ../..
```
5. Configure Accelerate by running the following command. Note that you should set the number of GPUs you wish to use for distillation, and also the data type (dtype) to your preferred dtype for training/inference (e.g. `bfloat16` on A100 GPUs, `float16` on V100 GPUs, etc.):
```bash
accelerate config
```
6. The last thing we need to do is link our Hugging Face account so that we can pull/push model repositories on the Hub. This will allow us to save our final distilled weights on the Hub so that we can share them with the community. Run the command:
```bash
git config --global credential.helper store
huggingface-cli login
```
And then enter an authentication token from https://huggingface.co/settings/tokens. Create a new token if you do not have one already. You should make sure that this token has "write" privileges.
To confirm that you have a working environment, first accept the terms of use of the Common Voice 16.1 dataset on the Hub: https://huggingface.co/datasets/mozilla-foundation/common_voice_16_1
You can run the following code cell to stream one sample of data from the Common Voice dataset, and check that you can
perform inference using the "tiny" Whisper model:
```python
from transformers import WhisperProcessor, WhisperForConditionalGeneration
from datasets import load_dataset, Audio
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny", low_cpu_mem_usage=True)
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model.to("cuda")
common_voice = load_dataset("mozilla-foundation/common_voice_16_1", "en", split="validation", streaming=True)
common_voice = common_voice.cast_column("audio", Audio(sampling_rate=processor.feature_extractor.sampling_rate))
inputs = processor(next(iter(common_voice))["audio"]["array"], sampling_rate=16000, return_tensors="pt")
input_features = inputs.input_features
generated_ids = model.generate(input_features.to("cuda"), max_new_tokens=128)
pred_text = processor.decode(generated_ids[0], skip_special_tokens=True)
print("Pred text:", pred_text)
print("Environment set up successful?", generated_ids.shape[-1] == 20)
```
## 1. Pseudo-Labelling
The python script [`run_pseudo_labelling.py`](run_pseudo_labelling.py) is a flexible inference script that can be used
to generate pseudo-labels under a range of settings, including using both greedy and beam-search. It is also compatible
with [🤗 Datasets](https://github.com/huggingface/datasets) *streaming mode*, allowing users to load massive audio
datasets with **no disk space requirements**. For more information on streaming mode, the reader is referred to the
blog post: [A Complete Guide to Audio Datasets](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).
> As of the latest Distil-Whisper release, [`distil-large-v3`](https://huggingface.co/distil-whisper/distil-large-v3), this
pseudo-labelling script also performs the added operation of concatenating (or packing) the audio inputs to 30-seconds.
Not only does this lead to a WER improvement when using the sequential long-form decoding algorithm, but concatenating audios
to 30-seconds also improves the throughput during training, since the amount of zero-padding on the audio inputs is minimised.
The following script demonstrates how to pseudo-label the Hindi split of the Common Voice 16.1 dataset with greedy sampling:
```bash
#!/usr/bin/env bash
accelerate launch run_pseudo_labelling.py \
--model_name_or_path "openai/whisper-large-v3" \
--dataset_name "mozilla-foundation/common_voice_16_1" \
--dataset_config_name "hi" \
--dataset_split_name "train+validation+test" \
--text_column_name "sentence" \
--id_column_name "path" \
--output_dir "./common_voice_16_1_hi_pseudo_labelled" \
--wandb_project "distil-whisper-labelling" \
--per_device_eval_batch_size 64 \
--dtype "bfloat16" \
--attn_implementation "sdpa" \
--logging_steps 500 \
--max_label_length 256 \
--concatenate_audio \
--preprocessing_batch_size 500 \
--preprocessing_num_workers 8 \
--dataloader_num_workers 8 \
--report_to "wandb" \
--language "hi" \
--task "transcribe" \
--return_timestamps \
--streaming False \
--generation_num_beams 1 \
--push_to_hub
```
On an 80 GB A100 GPU, the following script takes approximately 5 minutes to concatenate and pre-process the 20 hours of
audio data, and a further 10 minutes to transcribe the pseudo-labels. The pseudo-labelled dataset corresponding to this
script is available on the Hugging Face Hub under [sanchit-gandhi/common_voice_16_1_hi_pseudo_labelled](https://huggingface.co/datasets/sanchit-gandhi/common_voice_16_1_hi_pseudo_labelled).
The WER of the pre-trained Whisper large-v3 model is 17.2% on the test split. We will compare the performance of our distilled model against this number.
There are three noteworthy arguments that configure the dataset concatenation (or packing) process:
1. `concatenate_audio`: whether or not to concatenate (or pack) the audios to 30-second chunks. The latest Distil-Whisper model, [`distil-large-v3`](https://huggingface.co/distil-whisper/distil-large-v3#differences-with-distil-large-v2), highlights the WER improvements obtained using the sequential long-form decoding algorithm when concatenated audios are used. Concatenating audios to 30-seconds also improves the throughput during training, since the amount of zero-padding on the audio inputs is minimised. Hence, it is highly recommended to set `--concatenate_audio=True`.
2. `preprocessing_batch_size`: the batch size to use when concatenating (or packing) the audios. Using a larger batch size results in a greater portion of audio samples being packed to 30-seconds, at the expense of higher memory consumption. If you exceed your system's RAM when performing the concatenation operation, reduce the `preprocessing_batch_size` by a factor of 2 to 250 or even 125.
3. `preprocessing_num_workers`: the number of multiprocessing workers to use when concatenating the audios. Using more workers will result in faster pre-processing, at the expense of higher memory consumption. Ensure you do not exceed the maximum number of CPUs on your device.
In addition, the following arguments configure the inference of the Whisper model:
1. `language`: explicitly setting the language token during inference substantially improves the generation performance of the Whisper model, since the model is forced always to predict in the given language. We recommend you set the language to the language you wish to distil the Whisper model on. The only exception is when distilling an English-only model (i.e. where the model id is appended with an `.en`, e.g. `small.en`), the language argument should be set to None, since there is no language token used during training/inference.
2. `return_timestamps`: whether or not to predict timestamps in the pseudo-labels. Timestamp prediction is required should you want your distilled model to be able to predict timestamps at inference time (e.g. for the original OpenAI long-form transcription algorithm). However, the pseudo-labels are marginally less accurate than not using timestamps. We recommend pseudo-labelling **with** timestamps to ensure the distilled model is as general as possible.
3. `attn_implementation`: which attention implementation to use for inference. Set to `sdpa` for [PyTorch SDPA](https://huggingface.co/docs/transformers/v4.35.2/en/perf_infer_gpu_one#bettertransformer), or `flash_attention_2` if your hardware supports Flash Attention 2 and you have the [package installed](https://github.com/Dao-AILab/flash-attention).
4. `streaming`: whether or not to use Datasets' streaming mode. If enabled, the audio data will be streamed from the Hugging Face Hub with no disk space requirements. However, the user is then responsible for adding the pseudo-labels to the dataset script in a follow-up step (see [Using Streaming Mode](#TODO)). If set to `False`, the audio data will be downloaded and pre-processed offline. At the end of pseudo-labelling, the pseudo-labels will be automatically appended to the original dataset, meaning the dataset is ready to be used for the subsequent training step without any additional steps.
5. `generation_num_beams`: how many beams to use while decoding. In practice, we found the distilled model to perform comparably when the data was pseudo-labelled with `generation_num_beams=1` (greedy) or `generation_num_beams>1` (beam). This is likely because the WER filter compensates for the lower quality pseudo-labels obtained using greedy search. However, using `generation_num_beams=1` gives substantially faster inference time for the pseudo-labelling step, and so we recommend this configuration.
Should you have your own audio dataset, you can first [convert it](https://huggingface.co/docs/datasets/audio_dataset) to
Hugging Face Datasets format and push it to the Hugging Face Hub. You can then pseudo-label it using the script above,
replacing the `--dataset_name` with the name of your dataset on the Hub.
Otherwise, you may wish to use an open-source dataset already available on the Hugging Face Hub. We provide a summary of
the three most popular multilingual datasets in the table below. For more details, refer to the blog post: [A Complete Guide to Audio Datasets](https://huggingface.co/blog/audio-datasets#multilingual-speech-recognition).
| Dataset | Languages | Domain | Speaking Style | License | Text Column | ID Column |
|-----------------------------------------------------------------------------------------------|-----------|---------------------------------------|----------------|-----------|---------------------|--------------|
| [Multilingual LibriSpeech](https://huggingface.co/datasets/facebook/multilingual_librispeech) | 6 | Audiobooks | Narrated | CC-BY-4.0 | `"text"` | `"id"` |
| [Common Voice 16](https://huggingface.co/datasets/mozilla-foundation/common_voice_16_1) | 120 | Wikipedia text & crowd-sourced speech | Narrated | CC0-1.0 | `"sentence"` | `"path"` |
| [VoxPopuli](https://huggingface.co/datasets/facebook/voxpopuli) | 15 | European Parliament recordings | Spontaneous | CC0 | `"normalized_text"` | `"audio_id"` |
To achieve *robustness* to different distributions of audio data, it is recommended to train on multiple datasets where possible.
For example, the above three datasets all have splits for the German language. Thus, if distilling a Whisper model for German,
it would be wise to use a combination of the three datasets during training, in order to cover at least three distinct domains
(audiobooks, crowd-sourced speech, parliament recordings). You may wish to use a combination of open-source datasets, or
a combination of open-source and individually owned datasets to cover multiple distributions and domains. Moreover, if you were to train on low-resource datasets (<500 hours), you could experiment with [language mixing](#3-language-mixing) to improve robustness.
## 2. Initialisation
The script [`create_student_model.py`](create_student_model.py) can be used to initialise a small student model
from a large teacher model. When initialising a student model with fewer layers than the teacher model, the student is
initialised by copying maximally spaced layers from the teacher, as per the [DistilBart](https://arxiv.org/abs/2010.13002)
recommendations.
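For intuition, the sketch below shows roughly what the "maximally spaced" copy amounts to for a 2-layer student. It is only an illustration: `create_student_model.py` additionally handles the generation config, final decoder layer norm, processor files and saving.

```python
import copy
import numpy as np
from transformers import WhisperForConditionalGeneration

teacher = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v3")

# the student keeps the full encoder but only 2 decoder layers
student_config = copy.deepcopy(teacher.config)
student_config.decoder_layers = 2
student = WhisperForConditionalGeneration(student_config)

# maximally spaced teacher decoder layers: 0 and 31 (i.e. 1 and 32 one-indexed)
layer_ids = np.linspace(0, teacher.config.decoder_layers - 1, student_config.decoder_layers, dtype=int)

# copy the encoder and decoder embeddings wholesale, then the selected decoder layers
student.model.encoder.load_state_dict(teacher.model.encoder.state_dict())
student.model.decoder.embed_tokens.load_state_dict(teacher.model.decoder.embed_tokens.state_dict())
student.model.decoder.embed_positions.load_state_dict(teacher.model.decoder.embed_positions.state_dict())
for student_idx, teacher_idx in enumerate(layer_ids):
    student.model.decoder.layers[student_idx].load_state_dict(
        teacher.model.decoder.layers[int(teacher_idx)].state_dict()
    )
```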
First, we need to create a model repository on the Hugging Face Hub. This repository will contain all the required files
to reproduce the training run, alongside model weights, training logs and a README.md card. You can either create a model
repository directly on the Hugging Face Hub using the link: https://huggingface.co/new. Or, via the CLI, as we'll show here.
Let's pick a name for our distilled model: `distil-whisper-large-v3-hi`. We can run the following command to create a repository under this name:
```bash
huggingface-cli repo create distil-whisper-large-v3-hi
```
We can now see the model on the Hub, e.g. under https://huggingface.co/sanchit-gandhi/distil-whisper-large-v3-hi
Let's clone the repository so that we can place our training script and model weights inside:
```bash
git lfs install
git clone https://huggingface.co/sanchit-gandhi/distil-whisper-large-v3-hi
```
Be sure to change the repo address to `https://huggingface.co/<your-user-name>/<your-repo-name>`
We can now copy the relevant training scripts to the repository:
```bash
cd distil-whisper-large-v3-hi
cp ../distil-whisper/training/create_student_model.py .
cp ../distil-whisper/training/run_distillation.py .
```
The following command demonstrates how to initialise a student model from the Whisper [large-v3](https://huggingface.co/openai/whisper-large-v3)
checkpoint, with all 32 encoder layers and 2 decoder layers. The 2 student decoder layers are copied from teacher layers
1 and 32 respectively, as the maximally spaced layers:
```bash
#!/usr/bin/env bash
python create_student_model.py \
--teacher_checkpoint "openai/whisper-large-v3" \
--encoder_layers 32 \
--decoder_layers 2 \
--save_dir "./distil-large-v3-init"
```
The initialised model will be saved to the sub-directory `distil-large-v3-init` in our model repository.
**Note:** You can leverage language transfer by setting `--teacher_checkpoint` to "distil-whisper/distil-large-v3", see [language transfer](#22-language-transfer) for more details.
## 3. Training
The script [`run_distillation.py`](run_distillation.py) is an end-to-end script for loading multiple
datasets, a student model, a teacher model, and performing teacher-student distillation. It uses the loss formulation
from the [Distil-Whisper paper](https://arxiv.org/abs/2311.00430), which is a weighted sum of the cross-entropy and
KL-divergence loss terms.
The following command takes the Common Voice dataset that was pseudo-labelled in the first stage and trains the
2-layer decoder model initialised in the previous step. We pass the local path to the pseudo-labelled Common Voice dataset
(`../common_voice_16_1_hi_pseudo_labelled`), which you can change to the path where your local pseudo-labelled dataset is
saved.
In this example, we will combine the train and validation splits to give our training set, and evaluate on the test split
only. This is purely to demonstrate how to combine multiple pseudo-labelled datasets for training, rather than recommended
advice for defining train/validation splits. We advise that you train on the train splits of your dataset, evaluate and
tune hyper-parameters on the validation split, and only test the final checkpoint on the test split. Note how multiple
training datasets and splits can be loaded by separating the dataset arguments by `+` symbols. Thus, the script generalises
to any number of training datasets.
```bash
#!/usr/bin/env bash
accelerate launch run_distillation.py \
--model_name_or_path "./distil-large-v3-init" \
--teacher_model_name_or_path "openai/whisper-large-v3" \
--train_dataset_name "../common_voice_16_1_hi_pseudo_labelled+../common_voice_16_1_hi_pseudo_labelled" \
--train_split_name "train+validation" \
--text_column_name "sentence+sentence" \
--train_dataset_samples "7+4" \
--eval_dataset_name "../common_voice_16_1_hi_pseudo_labelled" \
--eval_split_name "test" \
--eval_text_column_name "sentence" \
--eval_steps 1000 \
--save_steps 1000 \
--warmup_steps 50 \
--learning_rate 0.0001 \
--lr_scheduler_type "constant_with_warmup" \
--timestamp_probability 0.2 \
--condition_on_prev_probability 0.2 \
--language "hi" \
--task "transcribe" \
--logging_steps 25 \
--save_total_limit 1 \
--max_steps 5000 \
--wer_threshold 20 \
--per_device_train_batch_size 32 \
--per_device_eval_batch_size 32 \
--dataloader_num_workers 8 \
--preprocessing_num_workers 8 \
--ddp_timeout 7200 \
--dtype "bfloat16" \
--attn_implementation "sdpa" \
--output_dir "./" \
--do_train \
--do_eval \
--gradient_checkpointing \
--overwrite_output_dir \
--predict_with_generate \
--freeze_encoder \
--freeze_embed_positions \
--streaming False \
--push_to_hub
```
The above training script will take approximately 3 hours to complete on an 80 GB A100 GPU and yield a final WER of 76%.
While the generations are starting to take form, there is still a 59% WER gap to the teacher model. This is hardly
surprising given that we only have 15 hours of un-filtered data, and closer to just 1.5 hours with data filtering.
As mentioned above, using upwards of 1000 hours of data and training for 10k steps will likely yield
more competitive performance. For the [Distil-Whisper paper](https://arxiv.org/abs/2311.00430), we trained on 21k hours
of audio data for 80k steps. We found that upwards of 13k hours of audio data was required to reach convergence on English
ASR (see Section 9.2 of the [paper](https://arxiv.org/abs/2311.00430)), so the more data you have, the better!
Scaling to multiple GPUs using [distributed data parallelism (DDP)](https://pytorch.org/tutorials/beginner/ddp_series_theory.html)
is trivial: simply run `accelerate config` and select the multi-GPU option, specifying the IDs of the GPUs you wish to use. The
above script can then be run using DDP with no code changes.
Training logs will be reported to TensorBoard and WandB, provided the relevant packages are available. An example of a
saved checkpoint pushed to the Hugging Face Hub can be found here: [sanchit-gandhi/distil-whisper-large-v3-hi](https://huggingface.co/sanchit-gandhi/distil-whisper-large-v3-hi).
There are a few noteworthy data arguments:
1. `train_dataset_samples`: defines the number of training samples in each dataset. Used to calculate the sampling probabilities in the dataloader. A good starting point is setting the samples to the number of hours of audio data in each split. A more refined strategy is setting it to the number of training samples in each split, however this might require downloading the dataset offline to compute these statistics.
2. `wer_threshold`: sets the WER threshold between the normalised pseudo-labels and normalised ground truth labels. Any samples with WER > `wer_threshold` are discarded from the training data. This is beneficial to avoid training the student model on pseudo-labels where Whisper hallucinated or got the predictions grossly wrong. In our English distillation experiments, we found a WER threshold of 10% provides the optimal trade-off between ensuring high-quality transcriptions, and not filtering unnecessary amounts of training data. For multilingual distillation, the threshold should be set in accordance with the WER achieved by the pre-trained model on the test set.
3. `streaming`: whether or not to use Datasets' streaming mode. Recommended for large datasets, where the audio data can be streamed from the Hugging Face Hub with no disk space requirements.
4. `timestamp_probability`: the per-sample probability for retaining timestamp tokens in the labels (should they contain them). Retaining some portion of timestamp tokens in the training data is required to ensure the distilled model can predict timestamps at inference time. In our experiments, we found that training on timestamps with high-probability hurts the distilled model's transcription performance. Thus, we recommend setting this to a value below 0.5. Typically, a value of 0.2 works well, giving good transcription and timestamp performance.
5. `condition_on_prev_probability`: the per-sample probability for conditioning on previous labels. Conditioning on previous tokens is required to ensure the distilled model can be used with the "sequential" long-form transcription algorithm at inference time. We did not experiment with this parameter, but found values around 0.2 to provide adequate performance. OpenAI pre-trained Whisper with a 50% probability of conditioning on previous tokens. Thus, you might wish to try higher values.
As well as a few noteworthy model arguments that can be configured to give optimal training performance:
1. `freeze_encoder`: whether to freeze the entire encoder of the student model during training. Beneficial when the student encoder is copied exactly from the teacher encoder. In this case, the encoder hidden-states from the teacher model are re-used for the student model. Stopping the gradient computation through the encoder and sharing the encoder hidden-states provides a significant memory saving, and can enable up to 2x batch sizes. A minimal sketch of what freezing amounts to is shown after this list.
2. `freeze_embed_positions`: whether to freeze the student model's decoder positional embeddings. Using the same embed positions as the teacher model, which is designed to handle context lengths up to 448 tokens, helps the student model retain its input id representation up to the full max input length.
3. `dtype`: data type (dtype) in which the model computation should be performed. Note that this only controls the dtype of the computations (forward and backward pass), and not the dtype of the parameters or optimiser states.
4. `freeze_decoder`: whether to freeze the student model's decoder. Note that the input tokens embeddings and language modelling head will remain trainable.
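As referenced in the first item above, freezing the encoder boils down to disabling gradients for its parameters. The training script does this for you (and additionally re-uses the encoder hidden-states across teacher and student), so the snippet below only shows what the flag implies:

```python
from transformers import WhisperForConditionalGeneration

student = WhisperForConditionalGeneration.from_pretrained("./distil-large-v3-init")

# equivalent in spirit to passing --freeze_encoder: no gradients flow through the encoder
for param in student.model.encoder.parameters():
    param.requires_grad = False
```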
And finally, a few noteworthy training arguments:
1. `max_steps`: defines the total number of optimisation steps (forward + backward pass) during training. To reach convergence, you should use a dataset of at least 1k hours and train for a minimum of 50k steps.
2. `lr_scheduler_type`: defines the learning rate schedule, one of `constant_with_warmup` or `linear`. When experimenting with a training set-up or training for very few steps (< 5k), using `constant_with_warmup` is typically beneficial, since the learning rate remains high over the short training run. When performing long training runs (> 5k), using a `linear` schedule generally results in superior downstream performance of the distilled model.
TODO:
- [ ] Template for model cards
## 4. Evaluation
There are four types of evaluation performed in Distil-Whisper:
1. Short form: evaluation on audio samples less than 30s in duration. Examples include typical ASR test sets, such as the LibriSpeech validation set.
2. Sequential long form: evaluation on audio samples longer than 30s in duration using the original "sequential" long-form algorithm. Examples include entire TED talks or earnings calls.
3. Chunked long form: evaluation on audio samples longer than 30s in duration using the Transformers "chunked" long-form algorithm.
4. Speculative decoding: evaluation on audio samples less than 30s in duration, where a faster, distilled model is used as the assistant to a slower, teacher model.
All four forms of evaluation are performed using the script [`run_eval.py`](run_eval.py). Unlike the pseudo-labelling
and training scripts, the evaluation script assumes that only one GPU accelerator is used. We can copy the corresponding
evaluation script to the model repository using the following command:
```bash
cp ../distil-whisper/training/run_eval.py .
```
Models are assessed jointly using:
1. The *word-error rate (WER)* metric: measures the number of substitution, deletion and insertion errors relative to the total number of words. A lower WER indicates a more accurate model.
2. The *inverse real-time factor (RTFx)* metric: measures the ratio of `audio input time : model compute time`. A higher RTFx indicates a faster model. Note that this metric is WER-dependent, meaning that it makes sense to compare two models' *RTFx* only at fixed *WER* performances. Indeed, deletions can lead to early stopping of token generation, which inflates the *RTFx* while also increasing the *WER* (a short sketch computing both metrics follows this list).
3. Token generation speed: This refers to the number of tokens generated per second. As with *RTFx*, this metric is dependent on the *WER* since token generation time is not linear. By default, this metric is calculated by averaging the total number of `generated tokens : generation time` (full forward pass of the model) when evaluating on the given test set. However, using the `--precise_tok_generation` flag will compute this metric separately for a fixed number of tokens.
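As a quick illustration of the first two metrics, assuming `jiwer` is installed (the evaluation script computes these for you):

```python
import jiwer

def wer_and_rtfx(references, predictions, audio_seconds, compute_seconds):
    wer = 100 * jiwer.wer(references, predictions)  # word error rate in %
    rtfx = audio_seconds / compute_seconds          # > 1 means faster than real time
    return wer, rtfx

# one substitution out of two words -> 50% WER; 1 hour of audio in 1 minute -> RTFx of 60
print(wer_and_rtfx(["hello word"], ["hello world"], audio_seconds=3600, compute_seconds=60))
```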
In all cases, it is particularly important to evaluate the final model on data that is *out-of-distribution (OOD)* with
the training data. Evaluating on OOD data provides insight as to how well the distilled model is likely to generalise to
different audio distributions at inference time. In our example, the Common Voice test set is *in-distribution (ID)*
with our training data, since it is taken from the same distribution as the Common Voice training set. Whereas the FLEURS
test set is OOD, since it is not used as part of the training set. See the [Datasets](#1-datasets) section for recommendations.
### Short Form
The script [`run_eval.py`](run_eval.py) can be used to evaluate a trained student model over multiple short-form
validation sets. The following example demonstrates how to evaluate the student model trained in the previous step on
the Common Voice `test` set (ID) and also the FLEURS `test` set (OOD). Again, it leverages streaming mode to bypass
the need to download the data offline:
```bash
#!/usr/bin/env bash
python run_eval.py \
--model_name_or_path "./" \
--dataset_name "../common_voice_16_1_hi_pseudo_labelled+google/fleurs" \
--dataset_config_name "default+hi_in" \
--dataset_split_name "test+test" \
--text_column_name "sentence+transcription" \
--batch_size 16 \
--dtype "bfloat16" \
--generation_max_length 256 \
--language "hi" \
--attn_implementation "sdpa" \
--streaming
```
The student model achieves an average WER of TODO% with an RTFx of TODO for a batch size of 16. We can easily adapt the above
script to evaluate the teacher model, simply by switching the `model_name_or_path` to `openai/whisper-large-v3`, which
achieves an average WER of TODO% with an RTFx of TODO. Therefore, for a batch size of 16, the student model is a factor of TODO
times faster than the teacher. The WER gap can be closed by training on more data (at least 1k hours) for more training
steps (at least 50k).
### Sequential Long Form
The original Whisper paper presents a long-form transcription algorithm that sequentially transcribes 30-second segments
of audio and shifts the sliding window according to the timestamps predicted by the model. This style of sequential
inference is performed directly using the [`.generate`](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperForConditionalGeneration.generate)
method in Transformers.
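For reference, sequential long-form transcription with `.generate` looks roughly like the following. The dummy two-minute array is a placeholder for your own long-form audio, and the processor call keeps the full audio (no 30-second truncation) so that `.generate` can slide over it:

```python
import numpy as np
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("distil-whisper/distil-large-v3")
model = WhisperForConditionalGeneration.from_pretrained("distil-whisper/distil-large-v3")

audio = np.zeros(16_000 * 120, dtype=np.float32)  # replace with your >30s audio at 16 kHz

inputs = processor(
    audio,
    sampling_rate=16_000,
    return_tensors="pt",
    truncation=False,
    padding="longest",
    return_attention_mask=True,
)
generated_ids = model.generate(**inputs, return_timestamps=True, language="en", task="transcribe")
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```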
The script [`run_eval.py`](run_eval.py) can be used to evaluate the trained student model on an arbitrary number of
long-form evaluation sets using the sequential algorithm. Since we don't have a long-form validation set for Hindi to hand,
in this example we'll evaluate the official Distil-Whisper model [`distil-large-v3`](https://huggingface.co/distil-whisper/distil-large-v3)
on the TED-LIUM validation set:
```bash
#!/usr/bin/env bash
accelerate launch run_eval.py \
--model_name_or_path "distil-whisper/distil-large-v3" \
--dataset_name "distil-whisper/tedlium-long-form" \
--dataset_config_name "default" \
--dataset_split_name "validation" \
--text_column_name "text" \
--batch_size 16 \
--dtype "bfloat16" \
--generation_max_length 256 \
--language "en" \
--attn_implementation "sdpa" \
--streaming
```
### Chunked Long Form
Chunked long form evaluation runs on the premise that a single long audio file can be *chunked* into smaller segments and
inferred in parallel. The resulting transcriptions are then joined at the boundaries to give the final text prediction.
A small overlap (or *stride*) is used between adjacent segments to ensure a continuous transcription across chunks.
This style of chunked inference is performed using the [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines)
class, which provides a wrapper around the [`.generate`](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperForConditionalGeneration.generate)
function for long-form inference.
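In plain Transformers code, chunked inference is a one-liner with the ASR pipeline; the file path and parameter values below are illustrative:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="distil-whisper/distil-large-v3",
    chunk_length_s=25.0,    # length of each chunk, matching the audio length seen in training
    batch_size=16,          # chunks are transcribed in parallel
    return_timestamps=True,
)
result = asr("path/to/long_audio.wav")
print(result["text"])
```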
The script [`run_eval.py`](run_eval.py) can be used to evaluate the trained student model on an arbitrary number of
long-form evaluation sets using the pipeline class. Again, in this example we'll evaluate distil-large-v3 on the
TED-LIUM validation set:
```bash
#!/usr/bin/env bash
python run_eval.py \
--model_name_or_path "openai/whisper-large-v3" \
--dataset_name "distil-whisper/tedlium-long-form" \
--dataset_config_name "default" \
--dataset_split_name "validation" \
--text_column_name "text" \
--use_pipeline \
--chunk_length_s 25.0 \
--language "en" \
--return_timestamps \
--dtype "bfloat16" \
--streaming
```
The argument `chunk_length_s` controls the length of the chunked audio samples. It should be set to match the typical
length of audio the student model was trained on. If unsure about what value of `chunk_length_s` is optimal for your case,
it is recommended to run a *sweep* over all possible values. A template script for running a [WandB sweep](https://docs.wandb.ai/guides/sweeps)
can be found under [`run_chunk_length_s_sweep.yaml`](flax/long_form_transcription_scripts/run_chunk_length_s_sweep.yaml).
### Speculative Decoding
Speculative decoding, or assisted generation, relies on the premise that a faster, assistant model can be used to speed-up
the generation of a slower, larger main model. Speculative decoding mathematically ensures that exactly the same outputs as
Whisper are obtained, while being ~2 times faster. This makes it the perfect drop-in replacement for existing Whisper
pipelines, since exactly the same outputs are guaranteed.
Distil-Whisper checkpoints can be designed to be efficient assistant models to Whisper for speculative decoding. More precisely,
by freezing the encoder during training, the distilled model can share the same encoder weights as Whisper during inference, since
the encoder weights are un-changed. In doing so, only the distilled 2-layer decoder has to be loaded in addition to the
original Whisper model, which is approximately an 8% increase to the total parameter count, with up to 2x faster inference
for low batch sizes. For more details on speculative decoding, the reader is advised to refer to the following blog post:
[Speculative Decoding for 2x Faster Whisper Inference](https://huggingface.co/blog/whisper-speculative-decoding).
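Outside of the evaluation script, the same idea can be sketched with the Transformers pipeline, where the distilled checkpoint is passed as the assistant model (paths and dtypes below are illustrative):

```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, pipeline

device = "cuda:0" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if torch.cuda.is_available() else torch.float32

# the distilled model drafts tokens that the large-v3 teacher then verifies,
# so the outputs are identical to running the teacher alone, just faster
assistant = AutoModelForSpeechSeq2Seq.from_pretrained(
    "distil-whisper/distil-large-v3", torch_dtype=dtype, low_cpu_mem_usage=True
).to(device)

asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v3",
    torch_dtype=dtype,
    device=device,
)
result = asr("path/to/audio.wav", generate_kwargs={"assistant_model": assistant})
print(result["text"])
```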
In the example below, we use our distilled model as an assistant to the large-v3 teacher model during inference:
```bash
#!/usr/bin/env bash
python run_eval.py \
--model_name_or_path "openai/whisper-large-v3" \
--assistant_model_name_or_path "./" \
--dataset_name "../common_voice_16_1_hi_pseudo_labelled+google/fleurs" \
--dataset_config_name "default+hi_in" \
--dataset_split_name "test+test" \
--text_column_name "sentence+transcription" \
--batch_size 16 \
--dtype "bfloat16" \
--generation_max_length 256 \
--language "hi" \
--attn_implementation "sdpa" \
--streaming
```
We see that we achieve a WER of TODO%, the same as what we obtained with the large-v3 model, but with an RTFx of TODO,
a factor of TODO faster than using the large-v3 model alone. The RTFx value can be improved by training the student on
more data and for more training steps, since this will improve the number of predicted tokens that match the teacher
predictions.
## Recommendations and guidelines
### 1. Datasets
As explained, ideally, you should aim for ~1000 hours of audio data for training a distilled model via KD. Moreover, you should evaluate your model on out-of-distribution test sets to assess generalization capacities. With at least 1500 hours of audio data for German, Dutch, French and Spanish, 600 hours for Italian, and 300 hours for Portuguese and Polish (which can be supplemented with your own datasets), a good setup to start with is:
- **Training datasets:** [Common Voice 17](https://huggingface.co/datasets/mozilla-foundation/common_voice_17_0) and [Multilingual Librispeech](https://huggingface.co/datasets/facebook/multilingual_librispeech). Use the `train` split for training, and the `validation` and `test` splits for in-distribution testing.
- **Test datasets:** [VoxPopuli](https://huggingface.co/datasets/facebook/voxpopuli) and [Fleurs](https://huggingface.co/datasets/google/fleurs). Use the `validation` and `test` splits for out-of-distribution testing.
### 2. Student model's decoder
#### 2.1 Number of Decoder Layers
We recommend using a 2-layer decoder (see language transfer below). However, you can adjust the number of decoder layers when initializing the student model to balance inference speed against accuracy. Experimentation has revealed that the Pareto-optimal points lie at 2-, 3- and 4-layer decoders. For indicative results, after 10,000 training steps and inference on an 80GB Nvidia H100 with a batch size of 16 and 20 generated tokens, compared to the [Whisper *large-v3*](https://huggingface.co/openai/whisper-large-v3) baseline:
<center>
| | rel. token gen. speed | ΔWER(%) |
|----------|:-------------:|------:|
| 2 layers | $3.66$ | $-3.5$ |
| 3 layers | $3.35$ | $-2.3$ |
| 4 layers | $3.11$ | $-1.8$ |
</center>
#### 2.2 Language Transfer
If you opt for a 2-layer decoder, consider leveraging language transfer by initializing the student model from the [distil-large-v3 English distilled model](https://huggingface.co/distil-whisper/distil-large-v3). For French, this method has shown performance improvements of ΔWER=-1.9% (compared to a 2-layer decoder initialized from [Whisper *large-v3*](https://huggingface.co/openai/whisper-large-v3)) after 10,000 training steps.
```diff
- --teacher_checkpoint "openai/whisper-large-v3" \
+ --teacher_checkpoint "distil-whisper/distil-large-v3" \
```
### 3. Language mixing
If you're working with low-resource languages (<500 hours of audio data), consider mixing your training data with a closely related language (for example, mix French and Spanish) to leverage knowledge transfer between languages. Experiments showed that mixing ~400 hours of French (which resulted in a model with poor generalization capacities) with ~500 hours of Spanish improved the model's out-of-distribution performance on French by ΔWER=-7.5%.
To do this:
1. Run [pseudo labeling](#1-pseudo-labelling) for each training dataset, setting the `--language` flag to the language of the respective dataset. In the example of mixing French and Spanish, simply modify the given [pseudo labeling](#1-pseudo-labelling) command with:
* pseudo labelling the French dataset
```diff
- --dataset_config_name "hi" \
- --output_dir "./common_voice_16_1_hi_pseudo_labelled" \
- --language "hi" \
+ --dataset_config_name "fr" \
+ --output_dir "./common_voice_16_1_fr_pseudo_labelled" \
+ --language "fr" \
```
* pseudo labelling the Spanish dataset
```diff
- --dataset_config_name "hi" \
- --output_dir "./common_voice_16_1_hi_pseudo_labelled" \
- --language "hi" \
+ --dataset_config_name "es" \
+ --output_dir "./common_voice_16_1_es_pseudo_labelled" \
+ --language "es" \
```
2. Conduct [training](#3-training) on these pseudo-labeled datasets, using the `--language` flag set to your targeted language. Note that this flag is only used for evaluation purposes, so you should set it to the targeted language. The language token used for forwarding the teacher and student model decoders is the one saved in the pseudo labels during pseudo-labeling, ensuring it is the correct one for the considered sample. In the example of mixing French and Spanish, simply modify the given [training](#3-training) command with:
```diff
- --train_dataset_name "../common_voice_16_1_hi_pseudo_labelled+../common_voice_16_1_hi_pseudo_labelled" \
- --train_split_name "train+validation" \
- --eval_dataset_name "../common_voice_16_1_hi_pseudo_labelled" \
- --eval_split_name "test" \
+ --train_dataset_name "../common_voice_17_0_fr_pseudo_labelled+../common_voice_17_0_es_pseudo_labelled" \
+ --train_split_name "train+train" \
+ --eval_dataset_name "../common_voice_16_1_fr_pseudo_labelled" \
+ --eval_split_name "validation" \
```
## Overview of Training Methods
### 1. Fine-Tuning
For fine-tuning, we take the original Whisper checkpoint and train it on one or more datasets using the standard
cross-entropy loss. As such, there is no involvement from the teacher checkpoint during training, and so the fine-tuned
model is permitted to *overfit* to the distribution of the training data we provide. This makes it appealing for "low-resource"
languages where the original Whisper model performs poorly, since we can boost the performance of the model on a single
language by *overfitting* to that distribution of data. Note that this means the fine-tuned model is prone to losing
its robustness to different audio distributions, which is the trade-off with improving performance on a specified dataset.
As a rule of thumb, fine-tuning is appropriate for languages where the original Whisper model performs > 20% WER, and we
have a relatively small quantity of training data available (< 1000 hours). With fine-tuning, we require as little as **10 hours**
of training data to significantly boost the performance of the Whisper model. For an in-depth guide to fine-tuning Whisper,
the reader is advised to refer to the blog post: [Fine-Tune Whisper For Multilingual ASR with 🤗 Transformers](https://huggingface.co/blog/fine-tune-whisper).
### 2. Shrink and Fine-Tune
Shrink and fine-tune (SFT) is a knowledge distillation (KD) technique in which we first *shrink* the teacher model to a
smaller student model by copying maximally spaced layers, and then *fine-tune* the student model on the cross-entropy loss
as described above. Typically, we retain the full encoder from the Whisper model and only shrink the decoder. Retaining
the entire encoder helps significantly with maintaining Whisper's robustness to different audio distributions (_c.f._
Section 9.3 of the [Distil-Whisper paper](https://arxiv.org/abs/2311.00430)).
We can either train the student model on a dataset of (audio, text) pairs as above, or we can use the pre-trained
Whisper model to generate *pseudo-labels* for our audio data and train on the (audio, pseudo-label) pairs.
Pseudo-labels can be used when either:
1. The original text transcriptions are normalised (lower-cased or no punctuation): the Whisper generated pseudo-labels contain both punctuation and casing, and so can be used as a substitute for the normalised transcriptions
2. The pre-trained Whisper model achieves < 20% WER on the languages: we then know the majority of the pseudo-labels will be accurate enough for us to train on.
They are not recommended when both of the following are true:
1. The original text is punctuated and cased
2. The pre-trained Whisper model achieves > 20% WER on the languages: in this case, we want to overfit to the particular distribution of the language, and so train directly on the original text data
To discard inaccurate pseudo-labels during training, we employ a simple WER heuristic to filter our pseudo-labelled
training data. We first normalise the original text and the pseudo-labelled text using the Whisper normaliser. If the
WER between the normalised text exceeds a 10% WER threshold, we discard the training sample. Else, we retain it for training.
Section 9.1 of the Distil-Whisper [paper](https://arxiv.org/abs/2311.00430) demonstrates the importance of using this
threshold for training.
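A minimal sketch of this heuristic, assuming `jiwer` and the normaliser bundled with the Whisper tokenizer in Transformers (the training script applies the same idea over the whole dataset):

```python
import jiwer
from transformers import WhisperTokenizer

tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-large-v3")
normalise = tokenizer.normalize  # English normaliser; use tokenizer.basic_normalize for other languages

def keep_sample(ground_truth: str, pseudo_label: str, wer_threshold: float = 10.0) -> bool:
    wer = 100 * jiwer.wer(normalise(ground_truth), normalise(pseudo_label))
    return wer <= wer_threshold

print(keep_sample("Hello, world!", "hello world"))        # True: identical after normalisation
print(keep_sample("The cat sat on the mat.", "The dog"))  # False: grossly wrong pseudo-label
```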
### 3. KL Divergence
In the KL Divergence setting, the student model is initialised by shrinking the teacher as before, and then trained to
match the predictions of the teacher during training.
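The objective is the weighted sum of losses described in the Training section. A rough sketch is below; the weights and temperature are illustrative rather than the exact values used by `run_distillation.py`, and padded label positions would additionally be masked out in practice:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, ce_weight=1.0, kl_weight=0.8, temperature=2.0):
    # cross-entropy against the (pseudo-)labels
    ce = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)), labels.view(-1), ignore_index=-100
    )
    # KL divergence between the student and teacher next-token distributions
    kl = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature**2
    return ce_weight * ce + kl_weight * kl
```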
### Summary of Methods
The following table summarises the two training paradigms: fine-tuning and knowledge distillation (KD). It suggests
minimum values for the pre-trained WER / training data to achieve reasonable performance:
| Method | Pre-Trained WER / % | Training Data / h |
|-------------|---------------------|-------------------|
| Fine-tuning | > 20 | < 1000 |
| KD | < 20 | > 1000 |
## Acknowledgements
* OpenAI for the Whisper [model](https://huggingface.co/openai/whisper-large-v3) and [original codebase](https://github.com/openai/whisper)
* Hugging Face 🤗 [Transformers](https://github.com/huggingface/transformers) for the Whisper model implementation
* Google's [TPU Research Cloud (TRC)](https://sites.research.google/trc/about/) program for Cloud TPU v4s used to train the official Distil-Whisper models
* The Hugging Face 🤗 cluster for enabling experimentation with the PyTorch scripts
## Citation
If you use this code-base, please consider citing the Distil-Whisper paper:
```
@misc{gandhi2023distilwhisper,
title={Distil-Whisper: Robust Knowledge Distillation via Large-Scale Pseudo Labelling},
author={Sanchit Gandhi and Patrick von Platen and Alexander M. Rush},
year={2023},
eprint={2311.00430},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
fqCF4CpxC3/DeepSeek-R1-llama-8b-financial-cot
|
fqCF4CpxC3
| 2025-03-14T10:32:39Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-14T09:43:45Z |
---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** fqCF4CpxC3
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
rbgo/SmolLM2-1.7B-R1-Distilled
|
rbgo
| 2025-03-14T10:32:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-14T10:30:40Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
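In the absence of an official snippet, a minimal sketch is shown below; the repo id is taken from this card, while the prompt and generation settings are purely illustrative:

```python
from transformers import pipeline

# Chat-style input is assumed from the repo's "conversational" tag
generator = pipeline("text-generation", model="rbgo/SmolLM2-1.7B-R1-Distilled", device_map="auto")
messages = [{"role": "user", "content": "Explain knowledge distillation in two sentences."}]
output = generator(messages, max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```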
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JingzheDing/Qwen-1.5B-finetune_from_distill
|
JingzheDing
| 2025-03-14T10:31:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-03-14T10:28:53Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
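No snippet is provided, so the following is a sketch built on assumptions: the repo tags mention 4-bit / bitsandbytes, so a quantized load is shown, and the prompt is illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "JingzheDing/Qwen-1.5B-finetune_from_distill"

# 4-bit load assumed from the repo tags; adjust if the checkpoint is full precision
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map="auto")

inputs = tokenizer("Explain model distillation briefly.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```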
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
terencezhang1997/llama-3-1-8b-answer-generator-ft-3
|
terencezhang1997
| 2025-03-14T10:29:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-14T05:31:33Z |
---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
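As a placeholder sketch (repo id from this card; chat-template usage is assumed from the "conversational" tag, and the question is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "terencezhang1997/llama-3-1-8b-answer-generator-ft-3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Chat-template usage is an assumption based on the repo's tags
messages = [{"role": "user", "content": "What is the capital of France?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```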
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
cboissier77/ppo-Huggy
|
cboissier77
| 2025-03-14T10:29:33Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2025-03-14T10:29:27Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: cboissier77/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
mergekit-community/L3.1-Athena-n-8B
|
mergekit-community
| 2025-03-14T10:28:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1",
"base_model:merge:ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1",
"base_model:Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B",
"base_model:merge:Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B",
"base_model:mergekit-community/L3-Boshima-a",
"base_model:merge:mergekit-community/L3-Boshima-a",
"base_model:mergekit-community/L3.1-Artemis-c-8B",
"base_model:merge:mergekit-community/L3.1-Artemis-c-8B",
"base_model:mergekit-community/L3.1-Athena-c-8B",
"base_model:merge:mergekit-community/L3.1-Athena-c-8B",
"base_model:mergekit-community/L3.1-Athena-m-8B",
"base_model:merge:mergekit-community/L3.1-Athena-m-8B",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:merge:meta-llama/Llama-3.1-8B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-14T10:22:58Z |
---
base_model:
- ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1
- mergekit-community/L3-Boshima-a
- mergekit-community/L3.1-Athena-c-8B
- mergekit-community/L3.1-Athena-m-8B
- mergekit-community/L3.1-Artemis-c-8B
- Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
- meta-llama/Llama-3.1-8B-Instruct
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) as a base.
### Models Merged
The following models were included in the merge:
* [ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1](https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1)
* [mergekit-community/L3-Boshima-a](https://huggingface.co/mergekit-community/L3-Boshima-a)
* [mergekit-community/L3.1-Athena-c-8B](https://huggingface.co/mergekit-community/L3.1-Athena-c-8B)
* [mergekit-community/L3.1-Athena-m-8B](https://huggingface.co/mergekit-community/L3.1-Athena-m-8B)
* [mergekit-community/L3.1-Artemis-c-8B](https://huggingface.co/mergekit-community/L3.1-Artemis-c-8B)
* [Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: float32
out_dtype: bfloat16
merge_method: model_stock
base_model: meta-llama/Llama-3.1-8B-Instruct
models:
- model: ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1
- model: Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
- model: mergekit-community/L3-Boshima-a
- model: mergekit-community/L3.1-Artemis-c-8B
- model: mergekit-community/L3.1-Athena-c-8B
- model: mergekit-community/L3.1-Athena-m-8B
```
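For reference, a configuration of this form can typically be applied with mergekit's `mergekit-yaml` entry point; the paths below are placeholders and the `--cuda` flag is optional:

```bash
pip install mergekit
# Write the YAML above to a file, then run the merge (output directory is a placeholder)
mergekit-yaml path/to/this-config.yaml ./output-model-directory --cuda
```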
|
Alphatao/e93b4038-afa4-4936-bcdc-c957e4ef3b4b
|
Alphatao
| 2025-03-14T10:25:48Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"gptj",
"axolotl",
"generated_from_trainer",
"base_model:furiosa-ai/mlperf-gpt-j-6b",
"base_model:adapter:furiosa-ai/mlperf-gpt-j-6b",
"region:us"
] | null | 2025-03-14T07:52:39Z |
---
library_name: peft
base_model: furiosa-ai/mlperf-gpt-j-6b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e93b4038-afa4-4936-bcdc-c957e4ef3b4b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: furiosa-ai/mlperf-gpt-j-6b
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 84767cf69a1abdeb_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/84767cf69a1abdeb_train_data.json
type:
field_input: statements
field_instruction: quiz
field_output: solution_text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
device_map:
? ''
: 0,1,2,3,4,5,6,7
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
flash_attention: false
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: false
hub_model_id: Alphatao/e93b4038-afa4-4936-bcdc-c957e4ef3b4b
hub_repo: null
hub_strategy: null
hub_token: null
learning_rate: 0.0002
load_best_model_at_end: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
- o_proj
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 840
micro_batch_size: 4
mlflow_experiment_name: /tmp/84767cf69a1abdeb_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.04
wandb_entity: null
wandb_mode: online
wandb_name: bd3a5c9d-56b2-4894-872a-514353553baf
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: bd3a5c9d-56b2-4894-872a-514353553baf
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# e93b4038-afa4-4936-bcdc-c957e4ef3b4b
This model is a fine-tuned version of [furiosa-ai/mlperf-gpt-j-6b](https://huggingface.co/furiosa-ai/mlperf-gpt-j-6b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0584
## Model description
More information needed
## Intended uses & limitations
More information needed
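Since this repository contains a PEFT LoRA adapter (see the framework versions below), one plausible way to use it is to load the adapter on top of the base model. This sketch is not taken from the card; the prompt and generation settings are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "furiosa-ai/mlperf-gpt-j-6b"
adapter_id = "Alphatao/e93b4038-afa4-4936-bcdc-c957e4ef3b4b"

tokenizer = AutoTokenizer.from_pretrained(base_id)
# trust_remote_code mirrors the axolotl config above; remove it if not required
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto", trust_remote_code=True)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Solve the following logic quiz step by step.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```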
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: ADAMW_BNB with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 840
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 9.9723 | 0.0009 | 1 | 1.2495 |
| 0.8782 | 0.0920 | 100 | 0.1092 |
| 0.9132 | 0.1841 | 200 | 0.1086 |
| 0.7205 | 0.2761 | 300 | 0.0922 |
| 0.7303 | 0.3682 | 400 | 0.0817 |
| 0.5313 | 0.4602 | 500 | 0.0671 |
| 0.4476 | 0.5522 | 600 | 0.0614 |
| 0.5061 | 0.6443 | 700 | 0.0593 |
| 0.4986 | 0.7363 | 800 | 0.0584 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Xing04/ppo-LunarLander-v2
|
Xing04
| 2025-03-14T10:25:00Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-03-14T10:24:38Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 246.72 +/- 19.41
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is assumed and may differ; check the repo's files):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it (filename is an assumption)
checkpoint = load_from_hub(repo_id="Xing04/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
harikrushna2272/ppo-SpaceInvaderNoFrameSkip-v2
|
harikrushna2272
| 2025-03-14T10:24:05Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-03-14T10:23:46Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 249.87 +/- 21.84
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is assumed and may differ; check the repo's files):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it (filename is an assumption)
checkpoint = load_from_hub(repo_id="harikrushna2272/ppo-SpaceInvaderNoFrameSkip-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
samoline/e7dd1f28-f738-42a4-8c25-9b5dc47d59ee
|
samoline
| 2025-03-14T10:23:08Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-1.5B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-03-14T10:19:39Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e7dd1f28-f738-42a4-8c25-9b5dc47d59ee
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 6ac14b838fd22a17_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6ac14b838fd22a17_train_data.json
type:
field_input: full_note
field_instruction: note
field_output: summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: false
group_by_length: false
hub_model_id: samoline/e7dd1f28-f738-42a4-8c25-9b5dc47d59ee
hub_repo: samoline
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 4
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 4
lora_target_linear: true
lr_scheduler: cosine
max_steps: 2
micro_batch_size: 1
mlflow_experiment_name: /tmp/6ac14b838fd22a17_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: samoline-nan
wandb_mode: online
wandb_name: 056716f4-42ca-4b78-b28a-e97bd499d57a
wandb_project: Gradients-On-Demand
wandb_run: dev
wandb_runid: 056716f4-42ca-4b78-b28a-e97bd499d57a
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# e7dd1f28-f738-42a4-8c25-9b5dc47d59ee
This model is a fine-tuned version of [unsloth/Qwen2.5-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: ADAMW_BNB with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1773 | 0.0000 | 1 | nan |
| 1.1488 | 0.0001 | 2 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
LandCruiser/Townsville_5
|
LandCruiser
| 2025-03-14T10:23:07Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-03-14T10:01:51Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
rbgo/SmolLM2-1-7B-Distill
|
rbgo
| 2025-03-14T10:23:05Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:HuggingFaceTB/SmolLM2-1.7B-Instruct",
"base_model:finetune:HuggingFaceTB/SmolLM2-1.7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-03-14T10:22:52Z |
---
base_model: HuggingFaceTB/SmolLM2-1.7B-Instruct
library_name: transformers
model_name: SmolLM2-1-7B-Distill
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for SmolLM2-1-7B-Distill
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-1.7B-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="rbgo/SmolLM2-1-7B-Distill", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/rbgo/huggingface/runs/y7e2v6jh)
This model was trained with SFT.
### Framework versions
- TRL: 0.15.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
saberzl/SIDA-13B
|
saberzl
| 2025-03-14T10:22:58Z | 1 | 1 | null |
[
"pytorch",
"llava",
"image-segmentation",
"en",
"dataset:saberzl/SID_Set",
"arxiv:2412.04292",
"base_model:xinlai/LISA-13B-llama2-v1",
"base_model:finetune:xinlai/LISA-13B-llama2-v1",
"license:llama2",
"region:us"
] |
image-segmentation
| 2025-03-13T18:47:26Z |
---
license: llama2
datasets:
- saberzl/SID_Set
language:
- en
metrics:
- accuracy
base_model:
- xinlai/LISA-13B-llama2-v1
pipeline_tag: image-segmentation
---
# SIDA Model Card
## Model details
**Model type:**
SIDA is a model fine-tuned from LISA, designed to detect and localize tampered regions in images.
**Model date:**
SIDA-13B was trained in February 2025.
**Paper or resources for more information:**
Paper: https://arxiv.org/pdf/2412.04292
Resource: https://github.com/hzlsaber/SIDA
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.
## Trained Data
SIDA was trained on SID_Set, which consists of real images, tampered images, and fully synthetic images. More information is available [here](https://huggingface.co/datasets/saberzl/SID_Set)
## Citation Information
If you find this model useful, please consider citing our paper:
```
@misc{huang2025sidasocialmediaimage,
title={SIDA: Social Media Image Deepfake Detection, Localization and Explanation with Large Multimodal Model},
author={Zhenglin Huang and Jinwei Hu and Xiangtai Li and Yiwei He and Xingyu Zhao and Bei Peng and Baoyuan Wu and Xiaowei Huang and Guangliang Cheng},
year={2025},
booktitle={Conference on Computer Vision and Pattern Recognition}
}
```
|
saberzl/SIDA-7B
|
saberzl
| 2025-03-14T10:22:30Z | 1 | 1 | null |
[
"pytorch",
"llava",
"image-segmentation",
"en",
"dataset:saberzl/SID_Set",
"arxiv:2412.04292",
"base_model:xinlai/LISA-7B-v1",
"base_model:finetune:xinlai/LISA-7B-v1",
"license:llama2",
"region:us"
] |
image-segmentation
| 2025-03-13T17:10:47Z |
---
license: llama2
datasets:
- saberzl/SID_Set
language:
- en
metrics:
- accuracy
base_model:
- xinlai/LISA-7B-v1
pipeline_tag: image-segmentation
---
# SIDA Model Card
## Model details
**Model type:**
SIDA is a model fine-tuned from LISA, designed to detect and localize tampered regions in images.
**Model date:**
SIDA-7B was trained in February 2025.
**Paper or resources for more information:**
Paper: https://arxiv.org/pdf/2412.04292
Resource: https://github.com/hzlsaber/SIDA
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.
## Trained Data
SIDA was trained on SID_Set, which consists of real images, tampered images, and fully synthetic images. More information is available [here](https://huggingface.co/datasets/saberzl/SID_Set)
## Citation Information
If you find this model useful, please consider citing our paper:
```
@misc{huang2025sidasocialmediaimage,
title={SIDA: Social Media Image Deepfake Detection, Localization and Explanation with Large Multimodal Model},
author={Zhenglin Huang and Jinwei Hu and Xiangtai Li and Yiwei He and Xingyu Zhao and Bei Peng and Baoyuan Wu and Xiaowei Huang and Guangliang Cheng},
year={2025},
booktitle={Conference on Computer Vision and Pattern Recognition}
}
```
|
DDTChen/news_model_lora
|
DDTChen
| 2025-03-14T10:21:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:taide/Llama-3.1-TAIDE-LX-8B-Chat",
"base_model:finetune:taide/Llama-3.1-TAIDE-LX-8B-Chat",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-14T10:21:17Z |
---
base_model: taide/Llama-3.1-TAIDE-LX-8B-Chat
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** DDTChen
- **License:** apache-2.0
- **Finetuned from model :** taide/Llama-3.1-TAIDE-LX-8B-Chat
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
cpapad06/unsloth_mistral_v03_article_categorization
|
cpapad06
| 2025-03-14T10:21:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-14T10:21:12Z |
---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** cpapad06
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ClaudioItaly/Exurbia-Enhanced-Q4_K_M-GGUF
|
ClaudioItaly
| 2025-03-14T10:19:55Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:ClaudioItaly/Exurbia-Enhanced",
"base_model:quantized:ClaudioItaly/Exurbia-Enhanced",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-14T10:19:29Z |
---
base_model: ClaudioItaly/Exurbia-Enhanced
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# ClaudioItaly/Exurbia-Enhanced-Q4_K_M-GGUF
This model was converted to GGUF format from [`ClaudioItaly/Exurbia-Enhanced`](https://huggingface.co/ClaudioItaly/Exurbia-Enhanced) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ClaudioItaly/Exurbia-Enhanced) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo ClaudioItaly/Exurbia-Enhanced-Q4_K_M-GGUF --hf-file exurbia-enhanced-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo ClaudioItaly/Exurbia-Enhanced-Q4_K_M-GGUF --hf-file exurbia-enhanced-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo ClaudioItaly/Exurbia-Enhanced-Q4_K_M-GGUF --hf-file exurbia-enhanced-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo ClaudioItaly/Exurbia-Enhanced-Q4_K_M-GGUF --hf-file exurbia-enhanced-q4_k_m.gguf -c 2048
```
|
LandCruiser/Townsville_4
|
LandCruiser
| 2025-03-14T10:18:50Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-03-14T10:01:51Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|