modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
string | string | timestamp[us, tz=UTC] | int64 | int64 | string | sequence | string | timestamp[us, tz=UTC] | string
---|---|---|---|---|---|---|---|---|---|
mradermacher/UwU-7B-Instruct-GGUF | mradermacher | 2025-04-27T23:45:33Z | 186 | 1 | transformers | [
"transformers",
"gguf",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:qingy2024/FineQwQ-142k",
"base_model:qingy2024/UwU-7B-Instruct",
"base_model:quantized:qingy2024/UwU-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-07T08:16:59Z | ---
base_model: qingy2024/UwU-7B-Instruct
datasets:
- qingy2024/FineQwQ-142k
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/qingy2024/UwU-7B-Instruct
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/UwU-7B-Instruct-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
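As a minimal sketch of loading one of the files from the table below in Python (assuming the `llama-cpp-python` bindings; the chosen quant and generation settings are illustrative):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one quant from this repo (Q4_K_M is a "fast, recommended" pick below).
model_path = hf_hub_download(
    repo_id="mradermacher/UwU-7B-Instruct-GGUF",
    filename="UwU-7B-Instruct.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```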
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/UwU-7B-Instruct-GGUF/resolve/main/UwU-7B-Instruct.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/UwU-7B-Instruct-GGUF/resolve/main/UwU-7B-Instruct.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/UwU-7B-Instruct-GGUF/resolve/main/UwU-7B-Instruct.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/UwU-7B-Instruct-GGUF/resolve/main/UwU-7B-Instruct.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/UwU-7B-Instruct-GGUF/resolve/main/UwU-7B-Instruct.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/UwU-7B-Instruct-GGUF/resolve/main/UwU-7B-Instruct.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/UwU-7B-Instruct-GGUF/resolve/main/UwU-7B-Instruct.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/UwU-7B-Instruct-GGUF/resolve/main/UwU-7B-Instruct.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/UwU-7B-Instruct-GGUF/resolve/main/UwU-7B-Instruct.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/UwU-7B-Instruct-GGUF/resolve/main/UwU-7B-Instruct.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/UwU-7B-Instruct-GGUF/resolve/main/UwU-7B-Instruct.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/UwU-7B-Instruct-GGUF/resolve/main/UwU-7B-Instruct.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
kokovova/8da75226-2960-4d2c-b6aa-75a56af8595d | kokovova | 2025-04-27T23:44:28Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-Math-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-Math-7B-Instruct",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-27T23:38:06Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-Math-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8da75226-2960-4d2c-b6aa-75a56af8595d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-Math-7B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 79f70a5a08719d33_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/79f70a5a08719d33_train_data.json
type:
field_input: operators
field_instruction: question_text
field_output: decomposition
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: kokovova/8da75226-2960-4d2c-b6aa-75a56af8595d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/79f70a5a08719d33_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 948ddd53-ae33-494a-be40-b9ab985979f9
wandb_project: s56-4
wandb_run: your_name
wandb_runid: 948ddd53-ae33-494a-be40-b9ab985979f9
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 8da75226-2960-4d2c-b6aa-75a56af8595d
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Math-7B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7955
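As a rough sketch of loading this LoRA adapter for inference (assuming `peft` and `bitsandbytes` are installed; 4-bit loading mirrors the `load_in_4bit: true` setting in the config above):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "Qwen/Qwen2.5-Math-7B-Instruct"
adapter_id = "kokovova/8da75226-2960-4d2c-b6aa-75a56af8595d"

# Load the base model in 4-bit, matching the training configuration.
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the LoRA weights from this repository.
model = PeftModel.from_pretrained(base, adapter_id)
```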
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: ADAMW_BNB with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.6119 | 0.0285 | 200 | 2.7955 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mlfoundations-dev/c1_science_0d_32s_0.3k | mlfoundations-dev | 2025-04-27T23:37:46Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T23:35:05Z | ---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: c1_science_0d_32s_0.3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# c1_science_0d_32s_0.3k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/c1_science_0d_32s_0.3k dataset.
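Since this is a full fine-tune rather than an adapter, it loads like any `transformers` causal LM; a minimal, illustrative sketch:
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mlfoundations-dev/c1_science_0d_32s_0.3k",
    device_map="auto",
)
print(generator("Briefly explain photosynthesis.", max_new_tokens=64)[0]["generated_text"])
```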
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: ADAMW_TORCH with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 13.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0a0+ecf3bae40a.nv25.01
- Datasets 3.5.0
- Tokenizers 0.20.3
|
AdversarialRLHF/pythia410m-rm-tldr6.9b_logprobcondpropprefix | AdversarialRLHF | 2025-04-27T23:36:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"trl",
"reward-trainer",
"base_model:mnoukhov/pythia410m-sft-tldr",
"base_model:finetune:mnoukhov/pythia410m-sft-tldr",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-27T20:57:00Z | ---
base_model: mnoukhov/pythia410m-sft-tldr
library_name: transformers
model_name: pythia410m-rm-tldr6.9b_logprobcondpropprefix
tags:
- generated_from_trainer
- trl
- reward-trainer
licence: license
---
# Model Card for pythia410m-rm-tldr6.9b_logprobcondpropprefix
This model is a fine-tuned version of [mnoukhov/pythia410m-sft-tldr](https://huggingface.co/mnoukhov/pythia410m-sft-tldr).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
Since this is a reward model (see the `text-classification` and `reward-trainer` tags), it scores candidate text rather than generating it; the input below is illustrative:
```python
from transformers import pipeline

# The reward model returns a classification-style score for a candidate completion.
scorer = pipeline("text-classification", model="AdversarialRLHF/pythia410m-rm-tldr6.9b_logprobcondpropprefix", device="cuda")
print(scorer("POST: ... TL;DR: a short candidate summary of the post.")[0])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/muqeeth/adversarial_goodhart_rlhf/runs/sngm0zjx)
This model was trained with the TRL reward trainer.
### Framework versions
- TRL: 0.16.0
- Transformers: 4.50.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
phospho-app/GetTheRubberNextG-ev9cj3a9de | phospho-app | 2025-04-27T23:30:51Z | 0 | 0 | null | [
"phosphobot",
"gr00t",
"region:us"
] | null | 2025-04-27T22:52:56Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Traceback (most recent call last):
File "/opt/conda/lib/python3.11/site-packages/huggingface_hub/utils/_http.py", line 409, in hf_raise_for_status
response.raise_for_status()
File "/opt/conda/lib/python3.11/site-packages/requests/models.py", line 1024, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/api/models/nebo1337/GetTheRubberNextG/preupload/main
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/root/src/helper.py", line 367, in predict
api.upload_file(
File "/opt/conda/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/huggingface_hub/hf_api.py", line 1624, in _inner
return fn(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/huggingface_hub/hf_api.py", line 4662, in upload_file
commit_info = self.create_commit(
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/huggingface_hub/hf_api.py", line 1624, in _inner
return fn(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/huggingface_hub/hf_api.py", line 4193, in create_commit
self.preupload_lfs_files(
File "/opt/conda/lib/python3.11/site-packages/huggingface_hub/hf_api.py", line 4416, in preupload_lfs_files
_fetch_upload_modes(
File "/opt/conda/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/huggingface_hub/_commit_api.py", line 680, in _fetch_upload_modes
hf_raise_for_status(resp)
File "/opt/conda/lib/python3.11/site-packages/huggingface_hub/utils/_http.py", line 459, in hf_raise_for_status
raise _format(RepositoryNotFoundError, message, response) from e
huggingface_hub.errors.RepositoryNotFoundError: 404 Client Error. (Request ID: Root=1-680ebe2a-7b1626101be5880811f14003;f4c6d7d7-5181-443c-b579-205043a00ee6)
Repository Not Found for url: https://huggingface.co/api/models/nebo1337/GetTheRubberNextG/preupload/main.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated. For more details, see https://huggingface.co/docs/huggingface_hub/authentication
Note: Creating a commit assumes that the repo already exists on the Huggingface Hub. Please use `create_repo` if it's not the case.
```
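The note in the traceback points at the fix: the target repository must exist before files can be uploaded. A minimal sketch of creating it first (repo id taken from the traceback; authentication handling is an assumption):
```python
from huggingface_hub import HfApi

api = HfApi()  # assumes you are already logged in, e.g. via `huggingface-cli login`
api.create_repo(
    repo_id="nebo1337/GetTheRubberNextG",
    repo_type="model",
    exist_ok=True,  # no-op if the repository already exists
)
```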
## Training parameters:
- **Dataset**: [nebo1337/GetTheRubberNextG](https://huggingface.co/datasets/nebo1337/GetTheRubberNextG)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 64
- **Training steps**: 1743
**Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=replicate_groot_training_pipeline)
**Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=replicate_groot_training_pipeline)
|
vitus9988/Qwen2.5-0.5B-ko-merge | vitus9988 | 2025-04-27T23:21:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:MarkrAI/KOpen-HQ-Hermes-2.5-60K",
"arxiv:2306.01708",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:merge:Qwen/Qwen2.5-0.5B",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:merge:Qwen/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-31T13:29:24Z | ---
base_model:
- Qwen/Qwen2.5-0.5B-Instruct
- Qwen/Qwen2.5-0.5B
library_name: transformers
tags:
- mergekit
- merge
datasets:
- MarkrAI/KOpen-HQ-Hermes-2.5-60K
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# Qwen2.5-0.5B-ko-merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B) as a base.
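A minimal sketch of reproducing a merge like this with the mergekit CLI, assuming the package is installed and the YAML under "Configuration" below is saved as `config.yaml` (the output path is illustrative):
```python
import subprocess

# mergekit ships a `mergekit-yaml` entry point that reads a merge config
# and writes the merged model to the given output directory.
subprocess.run(["mergekit-yaml", "config.yaml", "./Qwen2.5-0.5B-ko-merge"], check=True)
```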
### Models Merged
The following models were included in the merge:
* [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct)
* Qwen2.5-0.5B-Instruct-lora-merge
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Qwen2.5-0.5B-Instruct-lora-merge
parameters:
weight: 1
density: 1
- model: Qwen/Qwen2.5-0.5B-Instruct
parameters:
weight: 1
density: 1
merge_method: ties
base_model: Qwen/Qwen2.5-0.5B
parameters:
weight: 1
density: 1
normalize: true
int8_mask: true
dtype: bfloat16
``` |
harryyekini/aceai3 | harryyekini | 2025-04-27T23:21:48Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-04-27T22:35:02Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
googlefan/cycleqd-test-model | googlefan | 2025-04-27T23:17:35Z | 5 | 0 | null | [
"safetensors",
"qwen2",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"region:us"
] | null | 2024-12-05T07:19:49Z | ---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-0.5B
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
|
ikerm11/gemma1b_humanizer_lora | ikerm11 | 2025-04-27T23:10:40Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | reinforcement-learning | 2025-04-26T03:56:09Z | ---
license: apache-2.0
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="ikerm11/gemma1b_humanizer_lora")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("ikerm11/gemma1b_humanizer_lora")
model = AutoModelForCausalLMWithValueHead.from_pretrained("ikerm11/gemma1b_humanizer_lora")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
jncraton/Qwen2.5-0.5B-Instruct-ct2-int8 | jncraton | 2025-04-27T23:08:31Z | 8 | 0 | transformers | [
"transformers",
"chat",
"text-generation",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:quantized:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-21T00:28:18Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct/blob/main/LICENSE
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-0.5B
base_model_relation: quantized
tags:
- chat
library_name: transformers
---
# Qwen2.5-0.5B-Instruct
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context Support** up to 128K tokens and can generate up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the instruction-tuned 0.5B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 0.49B
- Number of Parameters (Non-Embedding): 0.36B
- Number of Layers: 24
- Number of Attention Heads (GQA): 14 for Q and 2 for KV
- Context Length: Full 32,768 tokens and generation 8192 tokens
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code of Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
The following code snippet with `apply_chat_template` shows how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-0.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
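Note that this repository holds a CTranslate2 int8 conversion rather than the original weights, so the snippet above applies to the upstream model. A rough sketch of running the converted files with `ctranslate2` (assuming the tokenizer files ship alongside the conversion; generation options are illustrative):
```python
import ctranslate2
from huggingface_hub import snapshot_download
from transformers import AutoTokenizer

model_dir = snapshot_download("jncraton/Qwen2.5-0.5B-Instruct-ct2-int8")
generator = ctranslate2.Generator(model_dir, device="cpu")
tokenizer = AutoTokenizer.from_pretrained(model_dir)

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Give me a short introduction to large language model."}],
    tokenize=False,
    add_generation_prompt=True,
)
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))
results = generator.generate_batch([tokens], max_length=256, include_prompt_in_result=False)
print(tokenizer.decode(results[0].sequences_ids[0]))
```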
## Evaluation & Performance
Detailed evaluation results are reported in this [blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to cite us.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
``` |
jncraton/Qwen2.5-7B-Instruct-ct2-int8 | jncraton | 2025-04-27T23:07:57Z | 2 | 0 | transformers | [
"transformers",
"chat",
"text-generation",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"arxiv:2309.00071",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-7B",
"base_model:finetune:Qwen/Qwen2.5-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-21T02:56:36Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct/blob/main/LICENSE
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-7B
tags:
- chat
library_name: transformers
---
# Qwen2.5-7B-Instruct
<a href="https://chat.qwenlm.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context Support** up to 128K tokens and can generate up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the instruction-tuned 7B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 7.61B
- Number of Parameters (Non-Embedding): 6.53B
- Number of Layers: 28
- Number of Attention Heads (GQA): 28 for Q and 4 for KV
- Context Length: Full 131,072 tokens and generation 8192 tokens
- Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code of Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
The following code snippet with `apply_chat_template` shows how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-7B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
### Processing Long Texts
The current `config.json` is set for context length up to 32,768 tokens.
To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
For supported frameworks, you could add the following to `config.json` to enable YaRN:
```json
{
...,
"rope_scaling": {
"factor": 4.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
}
}
```
For deployment, we recommend using vLLM.
Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
We advise adding the `rope_scaling` configuration only when processing long contexts is required.
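For a local checkout of the model, the edit above can be applied programmatically; a small sketch (the checkpoint path is illustrative):
```python
import json
from pathlib import Path

config_path = Path("./Qwen2.5-7B-Instruct/config.json")
config = json.loads(config_path.read_text())

# Enable YaRN scaling for inputs beyond 32,768 tokens (4.0 * 32768 = 131072).
config["rope_scaling"] = {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn",
}
config_path.write_text(json.dumps(config, indent=2))
```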
## Evaluation & Performance
Detailed evaluation results are reported in this [blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to cite us.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
``` |
HoangKhanh1999/HOANGKHANH | HoangKhanh1999 | 2025-04-27T23:04:35Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-04-27T23:04:34Z | ---
license: bigscience-openrail-m
---
|
Khanhchi1989P/KHANHCHI | Khanhchi1989P | 2025-04-27T23:04:35Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-04-27T23:04:34Z | ---
license: bigscience-openrail-m
---
|
ChuManh535/CHUMANH | ChuManh535 | 2025-04-27T23:04:34Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-04-27T23:04:34Z | ---
license: bigscience-openrail-m
---
|
Volko76/Qwen2.5-Coder-7B-GGUF | Volko76 | 2025-04-27T23:04:18Z | 43 | 0 | transformers | [
"transformers",
"gguf",
"code",
"qwen",
"qwen-coder",
"codeqwen",
"autoquant",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"arxiv:2409.12186",
"arxiv:2309.00071",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-7B",
"base_model:quantized:Qwen/Qwen2.5-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-20T10:28:17Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-7B/blob/main/LICENSE
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
base_model:
- Qwen/Qwen2.5-7B
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- qwen
- qwen-coder
- codeqwen
- autoquant
- gguf
---
# Qwen2.5-Coder-7B
## Introduction
Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder covers six mainstream model sizes (0.5, 1.5, 3, 7, 14, and 32 billion parameters) to meet the needs of different developers. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5:
- Significant improvements in **code generation**, **code reasoning** and **code fixing**. Based on the strong Qwen2.5, we scaled the training tokens up to 5.5 trillion, including source code, text-code grounding, synthetic data, etc. Qwen2.5-Coder-32B has become the current state-of-the-art open-source code LLM, with coding abilities matching those of GPT-4o.
- A more comprehensive foundation for real-world applications such as **Code Agents**, not only enhancing coding capabilities but also maintaining strengths in mathematics and general competencies.
- **Long-context Support** up to 128K tokens.
**This repo contains the 7B Qwen2.5-Coder model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 7.61B
- Number of Parameters (Non-Embedding): 6.53B
- Number of Layers: 28
- Number of Attention Heads (GQA): 28 for Q and 4 for KV
- Context Length: Full 131,072 tokens
- Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts.
**We do not recommend using base language models for conversations.** Instead, you can apply post-training (e.g., SFT, RLHF, or continued pretraining) or use this model for fill-in-the-middle tasks, as sketched below.
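A rough sketch of the fill-in-the-middle use mentioned above, against the upstream checkpoint (the FIM special tokens follow the Qwen2.5-Coder convention; treat the exact prompt layout as an assumption to verify against the official docs):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-Coder-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# Fill-in-the-middle: the model completes the span between prefix and suffix.
prompt = (
    "<|fim_prefix|>def quicksort(arr):\n    if len(arr) <= 1:\n        return arr\n"
    "<|fim_suffix|>\n    return quicksort(left) + [pivot] + quicksort(right)\n<|fim_middle|>"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```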
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186).
## Requirements
The code of Qwen2.5-Coder is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
### Processing Long Texts
The current `config.json` is set for context length up to 32,768 tokens.
To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
For supported frameworks, you could add the following to `config.json` to enable YaRN:
```json
{
...,
"rope_scaling": {
"factor": 4.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
}
}
```
For deployment, we recommend using vLLM.
Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
We advise adding the `rope_scaling` configuration only when processing long contexts is required.
## Evaluation & Performance
Detailed evaluation results are reported in this [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to cite us.
```
@article{hui2024qwen2,
title={Qwen2. 5-Coder Technical Report},
author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others},
journal={arXiv preprint arXiv:2409.12186},
year={2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
``` |
codesiddhant/t2s-mistral | codesiddhant | 2025-04-27T23:03:31Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3",
"license:apache-2.0",
"region:us"
] | null | 2025-04-27T19:41:42Z | ---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.3
tags:
- generated_from_trainer
model-index:
- name: t2s-mistral
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t2s-mistral
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0718
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: PAGED_ADAMW with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0887 | 1.0 | 95 | 0.0831 |
| 0.0708 | 2.0 | 190 | 0.0727 |
| 0.0651 | 3.0 | 285 | 0.0718 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
Volko76/Qwen2.5-Coder-3B-GGUF | Volko76 | 2025-04-27T23:03:06Z | 59 | 0 | transformers | [
"transformers",
"gguf",
"code",
"qwen",
"qwen-coder",
"codeqwen",
"autoquant",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"arxiv:2409.12186",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-3B",
"base_model:quantized:Qwen/Qwen2.5-3B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-20T09:26:17Z | ---
license: other
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-3B/blob/main/LICENSE
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
base_model:
- Qwen/Qwen2.5-3B
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- qwen
- qwen-coder
- codeqwen
- autoquant
- gguf
---
# Qwen2.5-Coder-3B
## Introduction
Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder covers six mainstream model sizes (0.5, 1.5, 3, 7, 14, and 32 billion parameters) to meet the needs of different developers. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5:
- Significant improvements in **code generation**, **code reasoning** and **code fixing**. Based on the strong Qwen2.5, we scaled the training tokens up to 5.5 trillion, including source code, text-code grounding, synthetic data, etc. Qwen2.5-Coder-32B has become the current state-of-the-art open-source code LLM, with coding abilities matching those of GPT-4o.
- A more comprehensive foundation for real-world applications such as **Code Agents**, not only enhancing coding capabilities but also maintaining strengths in mathematics and general competencies.
**This repo contains the 3B Qwen2.5-Coder model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 3.09B
- Number of Parameters (Non-Embedding): 2.77B
- Number of Layers: 36
- Number of Attention Heads (GQA): 16 for Q and 2 for KV
- Context Length: Full 32,768 tokens
**We do not recommend using base language models for conversations.** Instead, you can apply post-training (e.g., SFT, RLHF, or continued pretraining) or use this model for fill-in-the-middle tasks.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186).
## Requirements
The code of Qwen2.5-Coder is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
## Evaluation & Performance
Detailed evaluation results are reported in this [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to cite us.
```
@article{hui2024qwen2,
title={Qwen2. 5-Coder Technical Report},
author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others},
journal={arXiv preprint arXiv:2409.12186},
year={2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
``` |
MoyYuan/DeductiveReasoning-lot-all | MoyYuan | 2025-04-27T23:03:04Z | 0 | 0 | null | [
"pytorch",
"bert",
"en",
"dataset:MoyYuan/DeductiveReasoning",
"license:mit",
"region:us"
] | null | 2025-04-27T22:48:04Z | ---
license: mit
datasets:
- MoyYuan/DeductiveReasoning
language:
- en
---
Please refer to https://huggingface.co/datasets/MoyYuan/DeductiveReasoning for README information. |
darshankhunt/knee-osteoarthritis-model | darshankhunt | 2025-04-27T23:01:54Z | 0 | 0 | null | [
"en",
"license:mit",
"region:us"
] | null | 2025-04-27T23:00:46Z | ---
license: mit
language:
- en
--- |
hasdal/cd6c6af1-fabe-44b1-8e36-4623cc54cb0b | hasdal | 2025-04-27T23:00:14Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-04-27T22:57:54Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cd6c6af1-fabe-44b1-8e36-4623cc54cb0b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5d5c96ee38e45d35_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5d5c96ee38e45d35_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: hasdal/cd6c6af1-fabe-44b1-8e36-4623cc54cb0b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/5d5c96ee38e45d35_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5dc5c8ce-966a-479c-87cd-0307edd3c602
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5dc5c8ce-966a-479c-87cd-0307edd3c602
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# cd6c6af1-fabe-44b1-8e36-4623cc54cb0b
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6160
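A brief sketch of folding this adapter back into the base weights for standalone use (assuming `peft` is installed; the output path is illustrative):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B-Instruct", torch_dtype="auto")
model = PeftModel.from_pretrained(base, "hasdal/cd6c6af1-fabe-44b1-8e36-4623cc54cb0b")

# Merge the LoRA deltas into the base weights and drop the adapter wrappers.
merged = model.merge_and_unload()
merged.save_pretrained("./merged-model")
```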
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8621 | 0.0011 | 1 | 1.0452 |
| 1.0006 | 0.0032 | 3 | 1.0410 |
| 1.1716 | 0.0064 | 6 | 0.9260 |
| 0.7668 | 0.0096 | 9 | 0.6160 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
jncraton/Qwen2.5-3B-Instruct-ct2-int8 | jncraton | 2025-04-27T22:59:20Z | 0 | 0 | transformers | [
"transformers",
"chat",
"text-generation",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-3B",
"base_model:finetune:Qwen/Qwen2.5-3B",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-21T02:37:48Z | ---
license: other
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-3B-Instruct/blob/main/LICENSE
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-3B
tags:
- chat
library_name: transformers
---
# Qwen2.5-3B-Instruct
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context Support** up to 128K tokens and can generate up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the instruction-tuned 3B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 3.09B
- Number of Parameters (Non-Embedding): 2.77B
- Number of Layers: 36
- Number of Attention Heads (GQA): 16 for Q and 2 for KV
- Context Length: Full 32,768 tokens and generation 8192 tokens
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code of Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
The following code snippet with `apply_chat_template` shows how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-3B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Evaluation & Performance
Detailed evaluation results are reported in this [blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to cite us.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
``` |
Omaeriahi/interview008 | Omaeriahi | 2025-04-27T22:49:36Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:quantized:unsloth/Llama-3.2-3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-27T22:49:01Z | ---
base_model: unsloth/Llama-3.2-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Omaeriahi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-3B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Neelectric/OLMo-2-1124-7B-Instruct_SFTv01.04 | Neelectric | 2025-04-27T22:44:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"olmo2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:Neelectric/OpenR1-Math-cn_k12-91k",
"base_model:allenai/OLMo-2-1124-7B-Instruct",
"base_model:finetune:allenai/OLMo-2-1124-7B-Instruct",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T18:38:47Z | ---
base_model: allenai/OLMo-2-1124-7B-Instruct
datasets: Neelectric/OpenR1-Math-cn_k12-91k
library_name: transformers
model_name: OLMo-2-1124-7B-Instruct_SFTv01.04
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for OLMo-2-1124-7B-Instruct_SFTv01.04
This model is a fine-tuned version of [allenai/OLMo-2-1124-7B-Instruct](https://huggingface.co/allenai/OLMo-2-1124-7B-Instruct) on the [Neelectric/OpenR1-Math-cn_k12-91k](https://huggingface.co/datasets/Neelectric/OpenR1-Math-cn_k12-91k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Neelectric/OLMo-2-1124-7B-Instruct_SFTv01.04", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/neelectric/open-r1_SFT/runs/txqvy66b)
This model was trained with SFT.
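As a rough illustration (not the author's actual training script), a minimal TRL SFT run on this card's dataset might look like the sketch below; the training arguments are placeholders:

```python
# Minimal TRL SFT sketch; hyperparameters are illustrative, not this run's.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("Neelectric/OpenR1-Math-cn_k12-91k", split="train")

trainer = SFTTrainer(
    model="allenai/OLMo-2-1124-7B-Instruct",
    train_dataset=dataset,
    args=SFTConfig(output_dir="OLMo-2-1124-7B-Instruct_SFTv01.04"),
)
trainer.train()
```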
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Panchovix/Llama-3_1-Nemotron-Ultra-253B-v1-3.25bpw-h6-exl3 | Panchovix | 2025-04-27T22:37:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"nemotron-nas",
"text-generation",
"nvidia",
"llama-3",
"pytorch",
"conversational",
"custom_code",
"en",
"arxiv:2503.18908",
"arxiv:2502.00203",
"arxiv:2411.19146",
"license:other",
"autotrain_compatible",
"exl3",
"region:us"
] | text-generation | 2025-04-27T21:59:34Z | ---
library_name: transformers
license: other
license_name: nvidia-open-model-license
license_link: >-
https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
pipeline_tag: text-generation
language:
- en
tags:
- nvidia
- llama-3
- pytorch
---
EXL3 quant at 3.25 bpw. It fits into 128 GB of VRAM with semi-decent context.
# Llama-3.1-Nemotron-Ultra-253B-v1
## Model Overview

Llama-3.1-Nemotron-Ultra-253B-v1 is a large language model (LLM) which is a derivative of [Meta Llama-3.1-405B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-405B-Instruct) (AKA the *reference model*). It is a reasoning model that is post-trained for reasoning, human chat preferences, and tasks such as RAG and tool calling. The model supports a context length of 128K tokens. This model fits on a single 8xH100 node for inference.
Llama-3.1-Nemotron-Ultra-253B-v1 is a model which offers a great tradeoff between model accuracy and efficiency. Efficiency (throughput) directly translates to savings. Using a novel Neural Architecture Search (NAS) approach, we greatly reduce the model's memory footprint, enabling larger workloads, as well as reducing the number of GPUs required to run the model in a data center environment. This NAS approach enables the selection of a desired point in the accuracy-efficiency tradeoff. Furthermore, by using a novel method to vertically compress the model (see details [here](https://arxiv.org/abs/2503.18908)), it also offers a significant improvement in latency.
The model underwent a multi-phase post-training process to enhance both its reasoning and non-reasoning capabilities. This includes a supervised fine-tuning stage for Math, Code, Reasoning, Chat, and Tool Calling as well as multiple reinforcement learning (RL) stages using Group Relative Policy Optimization (GRPO) algorithms for reasoning, chat, and instruction-following.
This model is ready for commercial use.
For more details on how the model was trained, please see [this blog](https://developer.nvidia.com/blog/build-enterprise-ai-agents-with-advanced-open-nvidia-llama-nemotron-reasoning-models/).

This model is part of the Llama Nemotron Collection. You can find the other model(s) in this family here:
- [Llama-3.1-Nemotron-Nano-8B-v1](https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-8B-v1)
- [Llama-3.3-Nemotron-Super-49B-v1](https://huggingface.co/nvidia/Llama-3\_3-Nemotron-Super-49B-v1)
## License/Terms of Use
GOVERNING TERMS: Your use of this model is governed by the [NVIDIA Open Model License.](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/) Additional Information: [Llama 3.1 Community License Agreement](https://www.llama.com/llama3\_1/license/). Built with Llama.
**Model Developer:** NVIDIA
**Model Dates:** Trained between November 2024 and April 2025
**Data Freshness:** The pretraining data has a cutoff of 2023 per Llama-3.1-405B-Instruct
### Use Case:
Developers designing AI Agent systems, chatbots, RAG systems, and other AI-powered applications. Also suitable for typical instruction-following tasks.
### Release Date:
2025-04-07
## References
* [\[2502.00203\] Reward-aware Preference Optimization: A Unified Mathematical Framework for Model Alignment](https://arxiv.org/abs/2502.00203)
* [\[2411.19146\]Puzzle: Distillation-Based NAS for Inference-Optimized LLMs](https://arxiv.org/abs/2411.19146)
* [\[2503.18908\]FFN Fusion: Rethinking Sequential Computation in Large Language Models](https://arxiv.org/abs/2503.18908)
## Model Architecture
**Architecture Type:** Dense decoder-only Transformer model
**Network Architecture:** Llama-3.1-405B-Instruct, customized through Neural Architecture Search (NAS)
**This model was developed based on Llama-3.1-405B-Instruct.** <br>
**This model has 253B model parameters.** <br>
The model is a derivative of Llama 3.1-405B-Instruct, using Neural Architecture Search (NAS). The NAS algorithm results in non-standard and non-repetitive blocks. This includes the following:
* Skip attention: In some blocks, the attention is skipped entirely, or replaced with a single linear layer.
* Variable FFN: The expansion/compression ratio in the FFN layer is different between blocks.
* FFN Fusion: When several consecutive attention layers are skipped, which can result in a sequence of multiple FFNs, that sequence of FFNs is fused into a smaller number of wider FFN layers.
For each block of the reference model, we create multiple variants providing different tradeoffs of quality vs. computational complexity, discussed in more depth below. We then search over the blocks to create a model which meets the required throughput and memory while minimizing the quality degradation. To recover performance, the model initially undergoes knowledge distillation (KD) for 65 billion tokens. This is followed by a continual pretraining (CPT) phase for 88 billion tokens.
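As a conceptual illustration only (this is not NVIDIA's implementation, and the `attn_mode`/`ffn_ratio` knobs are invented for the sketch), the per-block variants described above can be pictured like this:

```python
# Conceptual sketch of NAS per-block variants: skipped/linear attention and
# variable FFN expansion ratios. Not the actual Puzzle/NAS code.
import torch.nn as nn

def make_block_variant(hidden: int, attn_mode: str, ffn_ratio: float) -> nn.ModuleDict:
    if attn_mode == "full":        # standard self-attention block
        attn = nn.MultiheadAttention(hidden, num_heads=8, batch_first=True)
    elif attn_mode == "linear":    # attention replaced with a single linear layer
        attn = nn.Linear(hidden, hidden)
    else:                          # attention skipped entirely
        attn = nn.Identity()
    # Variable FFN: the expansion/compression ratio differs between blocks.
    inner = int(hidden * ffn_ratio)
    ffn = nn.Sequential(nn.Linear(hidden, inner), nn.SiLU(), nn.Linear(inner, hidden))
    return nn.ModuleDict({"attn": attn, "ffn": ffn})
```

The search then picks one variant per block to hit the throughput and memory targets with minimal quality loss.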
## Intended use
Llama-3.1-Nemotron-Ultra-253B-v1 is a general purpose reasoning and chat model intended to be used in English and coding languages. Other non-English languages (German, French, Italian, Portuguese, Hindi, Spanish, and Thai) are also supported.
## Input
- **Input Type:** Text
- **Input Format:** String
- **Input Parameters:** One-Dimensional (1D)
- **Other Properties Related to Input:** Context length up to 131,072 tokens
## Output
- **Output Type:** Text
- **Output Format:** String
- **Output Parameters:** One-Dimensional (1D)
- **Other Properties Related to Output:** Context length up to 131,072 tokens
## Software Integration
- **Runtime Engine:** Transformers
- **Recommended Hardware Microarchitecture Compatibility:**
- NVIDIA Hopper
- NVIDIA Ampere
- **Preferred Operating System(s):** Linux
## Model Version
1.0 (4/7/2025)
## Quick Start and Usage Recommendations:
1. Reasoning mode (ON/OFF) is controlled via the system prompt, which must be set as shown in the example below. All instructions should be contained within the user prompt.
2. We recommend setting temperature to `0.6` and Top P to `0.95` for Reasoning ON mode.
3. We recommend using greedy decoding (temperature 0) for Reasoning OFF mode.
4. We do not recommend adding additional system prompts besides the control prompt; all instructions should be put into the user query.
5. We have provided a list of prompts to use for evaluation for each benchmark where a specific template is required.
You can try this model out through the preview API, using this link: [Llama-3\_1-Nemotron-Ultra-253B-v1](https://build.nvidia.com/nvidia/llama-3\_1-nemotron-ultra-253b-v1).
### Use It with Transformers
See the snippet below for usage with the [Hugging Face Transformers](https://huggingface.co/docs/transformers/main/en/index) library. Reasoning mode (ON/OFF) is controlled via the system prompt; see the examples below.
We recommend using the *transformers* package with version 4.48.3.
Example of reasoning on:
```py
import torch
import transformers
model_id = "nvidia/Llama-3_1-Nemotron-Ultra-253B-v1"
model_kwargs = {"torch_dtype": torch.bfloat16, "trust_remote_code": True, "device_map": "auto"}
tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token_id = tokenizer.eos_token_id
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
tokenizer=tokenizer,
max_new_tokens=32768,
temperature=0.6,
top_p=0.95,
**model_kwargs
)
thinking = "on"
print(pipeline([{"role": "system", "content": f"detailed thinking {thinking}"},{"role": "user", "content": "Solve x*(sin(x)+2)=0"}]))
```
Example of reasoning off:
```py
import torch
import transformers
model_id = "nvidia/Llama-3_1-Nemotron-Ultra-253B-v1"
model_kwargs = {"torch_dtype": torch.bfloat16, "trust_remote_code": True, "device_map": "auto"}
tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token_id = tokenizer.eos_token_id
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
tokenizer=tokenizer,
max_new_tokens=32768,
do_sample=False,
**model_kwargs
)
thinking = "off"
print(pipeline([{"role": "system", "content": f"detailed thinking {thinking}"},{"role": "user", "content": "Solve x*(sin(x)+2)=0"}]))
```
### Use It with vLLM
```
pip install vllm==0.8.3
```
An example on how to serve with vLLM:
```
python3 -m vllm.entrypoints.openai.api_server \
--model "nvidia/Llama-3_1-Nemotron-Ultra-253B-v1" \
--trust-remote-code \
--seed=1 \
--host="0.0.0.0" \
--port=5000 \
--served-model-name "nvidia/Llama-3_1-Nemotron-Ultra-253B-v1" \
--tensor-parallel-size=8 \
--max-model-len=32768 \
--gpu-memory-utilization 0.95 \
--enforce-eager
```
## Inference:
**Engine:**
- Transformers
**Test Hardware:**
- BF16:
- 8x NVIDIA H100-80GB
- 4x NVIDIA B100
- FP8:
- 4x NVIDIA H100-80GB
## Training and Evaluation Datasets
## Training Datasets
A large variety of training data was used for the knowledge distillation phase before the post-training pipeline, three of which were FineWeb, Buzz-V1.2, and Dolma.
The data for the multi-stage post-training phases is a compilation of SFT and RL data that supports improvements of math, code, general reasoning, and instruction following capabilities of the original Llama instruct model.
Prompts were either sourced from public and open corpora or synthetically generated. Responses were synthetically generated by a variety of models, with some prompts containing responses for both reasoning-on and reasoning-off modes, to train the model to distinguish between the two modes. This model was improved with Qwen.
We have released our [Llama-Nemotron-Post-Training-Dataset](https://huggingface.co/datasets/nvidia/Llama-Nemotron-Post-Training-Dataset) to promote openness and transparency in model development and improvement.
**Data Collection for Training Datasets:**
- Hybrid: Automated, Human, Synthetic
**Data Labeling for Training Datasets:**
- Hybrid: Automated, Human, Synthetic
## Evaluation Datasets
We used the datasets listed in the next section to evaluate Llama-3.1-Nemotron-Ultra-253B-v1.
Data Collection for Evaluation Datasets:
- Hybrid: Human/Synthetic
Data Labeling for Evaluation Datasets:
- Hybrid: Human/Synthetic/Automatic
## Evaluation Results
*These results contain both Reasoning On and Reasoning Off modes. We recommend using temperature=`0.6`, top_p=`0.95` for Reasoning On mode, and greedy decoding for Reasoning Off mode. All evaluations are done with a 32k sequence length. We run the benchmarks up to 16 times and average the scores to be more accurate.*
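The averaging described above amounts to the simple computation sketched here (the scores are made-up placeholders, not reported values):

```python
# Averaging pass@1 over repeated benchmark runs; numbers are illustrative only.
scores_per_run = [72.5, 71.8, 73.1, 72.6]   # pass@1 (%) from individual runs
print(f"reported pass@1: {sum(scores_per_run) / len(scores_per_run):.2f}")
```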
> NOTE: Where applicable, a Prompt Template will be provided. While completing benchmarks, please ensure that you are parsing for the correct output format as per the provided prompt in order to reproduce the benchmarks seen below.
### GPQA
| Reasoning Mode | pass@1 |
|--------------|------------|
| Reasoning Off | 56.60 |
| Reasoning On | 76.01 |
User Prompt Template:
```
"What is the correct answer to this question: {question}\nChoices:\nA. {option_A}\nB. {option_B}\nC. {option_C}\nD. {option_D}\nLet's think step by step, and put the final answer (should be a single letter A, B, C, or D) into a \boxed{}"
```
### AIME25
| Reasoning Mode | pass@1 |
|--------------|------------|
| Reasoning Off | 16.67 |
| Reasoning On | 72.50 |
User Prompt Template:
```
"Below is a math question. I want you to reason through the steps and then give a final answer. Your final answer should be in \boxed{}.\nQuestion: {question}"
```
### BFCL V2 Live
| Reasoning Mode | Score |
|--------------|------------|
| Reasoning Off | 73.62 |
| Reasoning On | 74.10 |
User Prompt Template:
```
You are an expert in composing functions. You are given a question and a set of possible functions.
Based on the question, you will need to make one or more function/tool calls to achieve the purpose.
If none of the function can be used, point it out. If the given question lacks the parameters required by the function,
also point it out. You should only return the function call in tools call sections.
If you decide to invoke any of the function(s), you MUST put it in the format of <TOOLCALL>[func_name1(params_name1=params_value1, params_name2=params_value2...), func_name2(params)]</TOOLCALL>
You SHOULD NOT include any other text in the response.
Here is a list of functions in JSON format that you can invoke.
<AVAILABLE_TOOLS>{functions}</AVAILABLE_TOOLS>
{user_prompt}
```
### LiveCodeBench (20240801-20250201)
| Reasoning Mode | pass@1 |
|--------------|------------|
| Reasoning Off | 29.03 |
| Reasoning On | 66.31 |
User Prompt Template (without starter code):
````
"You will be given a question (problem specification) and will generate a correct Python program that matches the specification and passes all tests.
Question: {prompt}
Read the inputs from stdin solve the problem and write the answer to stdout (do not directly test on the sample inputs). Enclose your code within delimiters as follows. Ensure that when the python program runs, it reads the inputs, runs the algorithm and writes output to STDOUT.
```python
# YOUR CODE HERE
```
````
User Prompt Template (with starter code):
````
You will be given a question (problem specification) and will generate a correct Python program that matches the specification and passes all tests.
Question: {prompt}
You will use the following starter code to write the solution to the problem and enclose your code within delimiters.
```python
{starter_code}
```
````
### IFEval
| Reasoning Mode | Strict:Instruction |
|--------------|------------|
| Reasoning Off | 88.85 |
| Reasoning On | 89.45 |
### MATH500
| Reasoning Mode | pass@1 |
|--------------|------------|
| Reasoning Off | 80.40 |
| Reasoning On | 97.00 |
User Prompt Template:
```
"Below is a math question. I want you to reason through the steps and then give a final answer. Your final answer should be in \boxed{}.\nQuestion: {question}"
```
## Ethical Considerations:
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
For more detailed information on ethical considerations for this model, please see the Model Card++ [Explainability](./EXPLAINABILITY.md), [Bias](./BIAS.md), [Safety & Security](./SAFETY_and_SECURITY.md), and [Privacy](./PRIVACY.md) Subcards.
Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/). |
Trarose/my_awesome_model | Trarose | 2025-04-27T22:36:51Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-27T21:38:57Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2324
- Accuracy: 0.9315
## Model description
More information needed
## Intended uses & limitations
More information needed
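Absent further details from the author, a minimal inference sketch for a DistilBERT text-classification fine-tune such as this one (the label names returned depend on how training was configured, so treat them as placeholders):

```python
from transformers import pipeline

# Label ids/names depend on the (undocumented) training setup.
classifier = pipeline("text-classification", model="Trarose/my_awesome_model")
print(classifier("This was a surprisingly good movie."))
```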
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2228 | 1.0 | 1563 | 0.2040 | 0.9226 |
| 0.1468 | 2.0 | 3126 | 0.2324 | 0.9315 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
jessicata/Qwen2.5-0.5B-Q8_0-GGUF | jessicata | 2025-04-27T22:34:26Z | 1 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:quantized:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-25T01:41:42Z | ---
base_model: Qwen/Qwen2.5-0.5B
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# jessicata/Qwen2.5-0.5B-Q8_0-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-0.5B`](https://huggingface.co/Qwen/Qwen2.5-0.5B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-0.5B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo jessicata/Qwen2.5-0.5B-Q8_0-GGUF --hf-file qwen2.5-0.5b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo jessicata/Qwen2.5-0.5B-Q8_0-GGUF --hf-file qwen2.5-0.5b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo jessicata/Qwen2.5-0.5B-Q8_0-GGUF --hf-file qwen2.5-0.5b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo jessicata/Qwen2.5-0.5B-Q8_0-GGUF --hf-file qwen2.5-0.5b-q8_0.gguf -c 2048
```
|
mlx-community/CodeLlama-13b-hf-6bit-mlx | mlx-community | 2025-04-27T22:29:35Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"llama",
"llama-2",
"text-generation",
"code",
"base_model:codellama/CodeLlama-13b-hf",
"base_model:quantized:codellama/CodeLlama-13b-hf",
"license:llama2",
"6-bit",
"region:us"
] | text-generation | 2025-04-27T22:20:11Z | ---
language:
- code
pipeline_tag: text-generation
tags:
- llama-2
- mlx
license: llama2
base_model: codellama/CodeLlama-13b-hf
library_name: mlx
---
# mlx-community/CodeLlama-13b-hf-6bit-mlx
This model [mlx-community/CodeLlama-13b-hf-6bit-mlx](https://huggingface.co/mlx-community/CodeLlama-13b-hf-6bit-mlx) was
converted to MLX format from [codellama/CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf)
using mlx-lm version **0.23.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/CodeLlama-13b-hf-6bit-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
MayBashendy/ellipse_SDP_all_binary_multilingual_e5_small_lr3e-05_targ0_epoch500 | MayBashendy | 2025-04-27T22:27:20Z | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-04-27T22:26:59Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed]
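Reloading requires the author's original `nn.Module` class definition, which is not published here; the sketch below therefore uses a stand-in class only to show the save/load pattern the mixin provides:

```python
# Stand-in architecture: the layer sizes are illustrative, not this repo's.
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class MyModel(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 384, num_labels: int = 2):
        super().__init__()
        self.head = nn.Linear(hidden_size, num_labels)

    def forward(self, x):
        return self.head(x)

# The mixin adds save_pretrained / from_pretrained / push_to_hub:
model = MyModel(hidden_size=384)
model.save_pretrained("local-checkpoint")
reloaded = MyModel.from_pretrained("local-checkpoint")
```
|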
mlfoundations-dev/c1_science_0d_4s_0.3k | mlfoundations-dev | 2025-04-27T22:24:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T21:18:09Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: c1_science_0d_4s_0.3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# c1_science_0d_4s_0.3k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/c1_science_0d_4s_0.3k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
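Absent further details from the author, a minimal chat-style generation sketch for this Qwen2.5-7B fine-tune (generation settings are illustrative defaults, not recommendations from the authors):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mlfoundations-dev/c1_science_0d_4s_0.3k",
    device_map="auto",
)
messages = [{"role": "user", "content": "Explain why the sky appears blue."}]
out = generator(messages, max_new_tokens=256, return_full_text=False)[0]
print(out["generated_text"])
```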
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 13.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
ArtusDev/remnant-ep2-mrg | ArtusDev | 2025-04-27T22:24:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"glm4",
"text-generation",
"conversational",
"zh",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T22:17:37Z | ---
license: mit
language:
- zh
- en
pipeline_tag: text-generation
library_name: transformers
---
# GLM-4-32B-0414
## Introduction
The GLM family welcomes new members, the **GLM-4-32B-0414** series models, featuring 32 billion parameters. Its performance is comparable to OpenAI's GPT series and DeepSeek's V3/R1 series. It also supports very user-friendly local deployment features. GLM-4-32B-Base-0414 was pre-trained on 15T of high-quality data, including substantial reasoning-type synthetic data. This lays the foundation for subsequent reinforcement learning extensions. In the post-training stage, we employed human preference alignment for dialogue scenarios. Additionally, using techniques like rejection sampling and reinforcement learning, we enhanced the model's performance in instruction following, engineering code, and function calling, thus strengthening the atomic capabilities required for agent tasks. GLM-4-32B-0414 achieves good results in engineering code, Artifact generation, function calling, search-based Q&A, and report generation. In particular, on several benchmarks, such as code generation or specific Q&A tasks, GLM-4-32B-Base-0414 achieves comparable performance with those larger models like GPT-4o and DeepSeek-V3-0324 (671B).
**GLM-Z1-32B-0414** is a reasoning model with deep thinking capabilities. This was developed based on GLM-4-32B-0414 through cold start, extended reinforcement learning, and further training on tasks including mathematics, code, and logic. Compared to the base model, GLM-Z1-32B-0414 significantly improves mathematical abilities and the capability to solve complex tasks. During training, we also introduced general reinforcement learning based on pairwise ranking feedback, which enhances the model's general capabilities.
**GLM-Z1-Rumination-32B-0414** is a deep reasoning model with rumination capabilities (against OpenAI's Deep Research). Unlike typical deep thinking models, the rumination model is capable of deeper and longer thinking to solve more open-ended and complex problems (e.g., writing a comparative analysis of AI development in two cities and their future development plans). Z1-Rumination is trained through scaling end-to-end reinforcement learning with responses graded by the ground truth answers or rubrics and can make use of search tools during its deep thinking process to handle complex tasks. The model shows significant improvements in research-style writing and complex tasks.
Finally, **GLM-Z1-9B-0414** is a surprise. We employed all the aforementioned techniques to train a small model (9B). GLM-Z1-9B-0414 exhibits excellent capabilities in mathematical reasoning and general tasks. Its overall performance is top-ranked among all open-source models of the same size. Especially in resource-constrained scenarios, this model achieves an excellent balance between efficiency and effectiveness, providing a powerful option for users seeking lightweight deployment.
## Showcase
### Animation Generation
<table>
<tr>
<td style="text-align: center; font-size: 16px; font-weight: bold; padding: 10px; width: 420px;">
GLM-Z1-32B-0414
</td>
<td style="text-align: center; font-size: 16px; font-weight: bold; padding: 10px; width: 420px;">
GLM-4-32B-0414
</td>
</tr>
<tr>
<td style="vertical-align: top; padding: 10px; width: 420px;">
<video src="https://github.com/user-attachments/assets/849ff9fd-b54d-4c74-9ee5-3412e1a09e32"
style="width: 400px; height: 300px; object-fit: contain;" autoplay loop muted playsinline></video>
<div style="margin-top: 10px; font-size: 14px; color: #333; width: 400px;">
write a Python program that shows a ball bouncing inside a spinning hexagon. The ball should be affected by gravity and friction, and it must bounce off the rotating walls realistically
</div>
</td>
<td style="vertical-align: top; padding: 10px; width: 420px;">
<video src="https://github.com/user-attachments/assets/8dccdb9d-cc44-4732-b438-74a4e3cb9dfb"
style="width: 400px; height: 300px; object-fit: contain;" autoplay loop muted playsinline></video>
<div style="margin-top: 10px; font-size: 14px; color: #333; width: 400px;">
Use HTML to simulate the scenario of a small ball released from the center of a rotating hexagon. Consider the collision between the ball and the hexagon's edges, the gravity acting on the ball, and assume all collisions are perfectly elastic. (Prompt translated from Chinese)
</div>
</td>
</tr>
</table>
### Web Design
<table>
<tr>
<td style="text-align: center; font-size: 16px; font-weight: bold; padding: 10px; width: 420px;">
GLM-4-32B-0414
</td>
<td style="text-align: center; font-size: 16px; font-weight: bold; padding: 10px; width: 420px;">
GLM-4-32B-0414
</td>
</tr>
<tr>
<td style="vertical-align: top; padding: 10px; width: 420px;">
<img src="https://github.com/user-attachments/assets/bd9c1fc1-c784-4e8f-9c76-5f7389a715f1"/>
<div style="margin-top: 10px; font-size: 14px; color: #333; width: 400px;">
Design a drawing board that supports custom function plotting, allowing adding and deleting custom functions, and assigning colors to functions. (Prompt translated from Chinese)
</div>
</td>
<td style="vertical-align: top; padding: 10px; width: 420px;">
<img src="https://github.com/user-attachments/assets/7ad12d52-9229-4278-8d1b-ffbf43e99070"/>
<div style="margin-top: 10px; font-size: 14px; color: #333; width: 400px;"> Design a UI for a mobile machine learning platform, which should include interfaces for training tasks, storage management, and personal statistics. The personal statistics interface should use charts to display the user's resource usage over a period. Use Tailwind CSS to style the page, and display these 3 mobile interfaces tiled on a single HTML page. (Prompt translated from Chinese) </div>
</td>
</tr>
</table>
### SVG Generation
<table>
<tr>
<td style="text-align: center; font-size: 16px; font-weight: bold; padding: 10px; width: 420px;">
GLM-4-32B-0414
</td>
<td style="text-align: center; font-size: 16px; font-weight: bold; padding: 10px; width: 420px;">
GLM-4-32B-0414
</td>
</tr>
<tr>
<td style="vertical-align: top; padding: 10px; width: 420px;">
<img src="https://github.com/user-attachments/assets/9407e4c1-1876-4ab5-838c-839836fb418a"/>
<div style="margin-top: 10px; font-size: 14px; color: #333; width: 400px;">
Create a misty Jiangnan scene using SVG. (Prompt translated from Chinese)
</div>
</td>
<td style="vertical-align: top; padding: 10px; width: 420px;">
<img src="https://github.com/user-attachments/assets/bcce8c5a-cedf-45c8-b666-ddb023d5b49c"/>
<div style="margin-top: 10px; font-size: 14px; color: #333; width: 400px;"> Use SVG to illustrate the training process of an LLM. (Prompt translated from Chinese) </div>
</td>
</tr>
</table>
### Search-Based Writing
For search-based writing tasks, we use the following system prompt (translated from Chinese) to have the model respond based on search results:
```
Answer the user's question based on the search results provided.

## Notes
1. Make full use of and organize the collected information rather than simply copy-pasting it, and produce a professional, in-depth answer that meets the user's requirements.
2. When the provided information is sufficient, make your answer as long as possible; starting from the user's intent, provide a reply with sufficient information and multiple perspectives.
3. Not all search results are closely relevant to the user's question; screen, filter, and use them carefully.
4. Answers to objective questions are usually very short; you may add one or two sentences of related information to enrich the content.
5. Make sure your reply is well formatted and highly readable. For comparisons or enumerations across multiple entities, use list formats to help the user better understand the information.
6. Unless the user requests otherwise, keep the language of your answer consistent with the language of the user's question.
7. Where appropriate, cite search results at the end of a sentence using a format such as: 【0†source】.
```
When using, you can obtain search results through methods such as `RAG` or `WebSearch`, and wrap them in `observation`, for example:
```json
[
{
"role": "user",
"content": "Explore the common characteristics of children's literature, with a focus on its narrative techniques and thematic tendencies. This includes narrative techniques: common approaches in children's literature such as first-person, third-person, omniscient narrator, and interactive narration, and their influence on young readers. It also includes thematic tendencies: recurring themes in children's literature such as growth, adventure, friendship, and family, with an analysis of how these themes impact children's cognitive and emotional development. Additionally, other universal features such as the use of personification, repetitive language, symbolism and metaphor, and educational value should be considered. Please provide a detailed analytical report based on academic research, classic examples of children's literature, and expert opinions."
},
{
"role": "observation",
"content": "ใ{id}โ {title}โ {url}ใ\n{content}"
},
...
]
```
For the above prompt, we use an internal or external search model to obtain the search results. Using the format shown above, we can generate the following analysis report:
<div style="height: 400px; width: 100%; overflow: auto; border: 5px solid #ddd; padding: 20px;">
# Analysis Report on Common Characteristics of Children's Literature
Children's literature, as a literary genre specifically created for children, possesses unique artistic features and expressive techniques. This report will comprehensively analyze the common characteristics of children's literature from three dimensions: narrative methods, thematic tendencies, and other universal features, integrating academic research, classic examples of children's literature, and expert viewpoints.
## Narrative Methods and Their Impact on Child Readers
The narrative methods of children's literature exhibit diverse characteristics, including first-person, third-person, narration, and interactive storytelling. These different narrative perspectives and methods profoundly influence children's reading experiences and cognitive development.
**First-person narration** is common in children's literature, unfolding the story from the perspective of a child protagonist. This narrative style bridges the gap between the reader and the story, making it easier for child readers to identify with the characters and develop emotional resonance. For example, many coming-of-age novels use first-person narration, allowing the protagonist to directly recount their experiences and feelings, making readers feel as though they are experiencing the joys and sorrows of growth alongside the protagonist. This narrative perspective lends the work a strong sense of authenticity and intimacy, helping to cultivate children's empathy【1†source】.
**Third-person narration** offers a broader perspective, allowing the author to flexibly switch between different characters' viewpoints and present richer layers of the story. In children's literature, third-person omniscient narration enables the author to control the narrative pace, revealing or concealing information as needed to guide children's attention. At the same time, third-person narration facilitates direct dialogue between the author and the reader, conveying values or explaining complex concepts through narration. This narrative method positively influences children's macro-thinking and comprehensive understanding【1†source】.
**Narration (authorial intrusion)** is a unique narrative technique in children's literature, where the author directly appears as the "storyteller," explaining the background, commenting on characters, or posing questions to the reader. This technique is particularly common in classic fairy tales, such as the opening lines of *Andersen's Fairy Tales*: "Once, there was a child..." Narration helps children understand the story's context, fills cognitive gaps, and conveys the author's educational intent. Research shows that appropriate authorial intrusion aids children in grasping the story's structure and improving reading comprehension【5†source】.
**Interactive storytelling** is a new trend in contemporary children's literature, especially prominent in the digital media era. Interactive storytelling breaks the traditional unidirectional author-reader relationship, encouraging child readers to participate in the story's creation, such as by choosing plot directions, character dialogues, or endings. This participatory reading enhances children's sense of agency and fosters decision-making skills and creative thinking. For example, some children's reading apps incorporate interactive elements, allowing children to influence the story's development through clicks, drag-and-drop actions, and other operations, thereby gaining a stronger sense of immersion and achievement【6†source】. Interactive storytelling transforms children from passive information recipients into active meaning-makers, uniquely contributing to the development of their subjectivity.
*Table: Common Narrative Methods in Children's Literature and Their Effects*
| **Narrative Method** | **Characteristics** | **Impact on Child Readers** | **Classic Examples** |
|----------------------|--------------------|----------------------------|---------------------|
| **First-Person** | Told from the child protagonist's perspective | Enhances immersion, fosters empathy | *Charlotte's Web*, *The Straw House* |
| **Third-Person** | Omniscient or limited perspective | Expands horizons, develops comprehensive understanding | *Harry Potter* series |
| **Narration** | Direct authorial intrusion into the narrative | Aids comprehension, conveys values | *Andersen's Fairy Tales* |
| **Interactive** | Encourages reader participation in creation | Cultivates agency and creative thinking | Children's interactive reading apps |
Notably, the narrative methods of children's literature are often closely intertwined with the **childhood perspective**. The childhood perspective does not necessarily mean the narrator must be a child but refers to the work's ability to describe the world to the greatest extent from a child's heart, expressing their inner psychology and external circumstances【2†source】. Through the childhood perspective, readers can embark on a spiritual journey with a child's mindset, a narrative strategy that creates a strong sense of realism, allowing child readers to achieve emotional identification and cognitive resonance during the reading process【1†source】. The use of the childhood perspective gives the work's language a perceptual and naive quality, often with a prose-like and spatial structure, artistic features that align with children's cognitive characteristics and aid their acceptance and understanding【2†source】.
## Thematic Tendencies and Their Impact on Children's Cognitive and Emotional Development
The thematic choices in children's literature exhibit distinct tendencies, with common themes including growth, adventure, friendship, and family. These themes not only form the core content of children's literature but also subtly influence children's cognitive development and emotional shaping.
**The theme of growth** is one of the central motifs in children's literature. Growth narratives are regarded as the artistic lifeblood of children's literature, focusing on depicting the pivotal moments of rapid psychological development in children, particularly the awakening and establishment of self-awareness【3†source】. Growth literature typically includes three elements: an artistic portrayal of the self-awareness construction process in growing adolescents, a developmental story with logical propulsion, and the presentation of the protagonist's spiritual trials and quest for direction【3†source】. By reading growth-themed works, child readers can indirectly experience the confusion and breakthroughs of growing up and understand the formation of self-identity. Classics such as Astrid Lindgren's *Pippi Longstocking* and Cao Wenxuan's *The Straw House* vividly depict children's psychological growth trajectories in specific environments. Research indicates that growth-themed literary works help children build a positive self-concept and develop the courage and resilience to face challenges, positively contributing to their psychological development【9†source】.
**The theme of adventure** holds an important place in children's literature, satisfying children's curiosity about exploring the unknown. Adventure stories often feature unusual settings and unknown challenges, with the protagonist growing through overcoming difficulties. Classics like *Robinson Crusoe* and *The Adventures of Tom Sawyer* attract child readers with thrilling plots while conveying the importance of qualities such as courage, wisdom, and perseverance. The impact of adventure themes on children's cognitive development mainly lies in expanding their imaginative space and fostering problem-solving skills. In adventure stories, children must analyze situations, make plans, and respond to unexpected events alongside the protagonist, a process that exercises their logical thinking and adaptability【14†source】. At the same time, the unfamiliar environments and novel experiences in adventure stories stimulate children's curiosity and desire to learn, laying the foundation for cultivating an exploratory spirit. As experts point out, excellent children's literature should be grounded in reality, rich in depth, and generate significant inspiration and emotional appeal, guiding children to comprehensively understand the world【14†source】.
**The theme of friendship** is equally prevalent in children's literature, reflecting children's emphasis on peer relationships. Friendship and love are regarded as humanity's most precious qualities, often depicted in children's literature as beacons in the night, guiding children toward the future【9†source】. Friendship stories typically revolve around interactions between children, portraying positive behaviors such as sharing, cooperation, and understanding. Examples include the genuine friendships among the children at Tomoe Gakuen in *Totto-Chan: The Little Girl at the Window* and the promise and mutual aid between Wilbur and Charlotte in *Charlotte's Web*. These stories help child readers recognize the value of friendship and learn how to build and maintain interpersonal relationships. Research shows that children need peer support during their growth, as friends provide crucial emotional anchors, offering the greatest emotional support and comfort in unfamiliar environments【16†source】. By reading friendship-themed works, children can learn social skills, develop empathy, and cultivate a spirit of cooperation, qualities essential for their social development【17†source】.
**The theme of family** is an indispensable subject in children's literature, depicting the emotional bonds and interaction patterns among family members. As the primary setting for children's earliest socialization, the family atmosphere and parenting styles profoundly impact children's mental health【10†source】. Family stories in children's literature often focus on parent-child relationships, sibling bonds, and other dynamics, such as Alice's relationship with her sister in *Alice's Adventures in Wonderland* and the Little Prince's interactions with the rose in *The Little Prince*. These stories help children understand the responsibilities and expectations of family roles and learn to handle conflicts within the family. Research indicates that a positive family atmosphere and parental support promote the development of children's positive psychological traits, while adverse family environments and parenting behaviors negatively affect their mental health【10†source】【11†source】. By reading family-themed works, children can gain emotional support, learn skills for managing family relationships, and establish healthy family values.
*Table: Common Themes in Children's Literature and Their Impact on Child Development*
| **Theme Type** | **Content Representation** | **Impact on Cognitive Development** | **Impact on Emotional Development** | **Classic Examples** |
|---------------|---------------------------|-------------------------------------|-------------------------------------|---------------------|
| **Growth** | Awakening of self-awareness, psychological trials and breakthroughs | Establishes self-concept, fosters problem-solving skills | Shapes positive self-identity, enhances psychological resilience | *The Straw House*, *Pippi Longstocking* |
| **Adventure** | Exploring the unknown, overcoming challenges | Expands imaginative space, exercises logical thinking | Cultivates courage and perseverance | *Robinson Crusoe*, *The Adventures of Tom Sawyer* |
| **Friendship** | Peer interactions, mutual aid and cooperation | Learns social skills, understands interpersonal dynamics | Develops empathy, builds a sense of belonging | *Charlotte's Web*, *Totto-Chan: The Little Girl at the Window* |
| **Family** | Parent-child relationships, sibling bonds | Understands social roles, learns communication skills | Gains emotional support, establishes secure attachments | *Alice's Adventures in Wonderland*, *The Little Prince* |
Regarding thematic choices, children's literature researcher Zhu Ziqiang proposed the famous "Three Major Motifs" theory, categorizing children's literary works into "the motif of love," "the motif of the mischievous child," and "the motif of nature"【8†source】. The motif of love focuses on emotional connections between children and adults or peers; the motif of the mischievous child portrays children's free-spirited nature; and the motif of nature emphasizes the harmonious relationship between children and the natural environment. These three motifs reflect the richness of the children's world from different angles, providing diverse emotional experiences and cognitive frameworks for children. Notably, these themes do not exist in isolation; outstanding works often organically integrate multiple themes. For example, the *Harry Potter* series incorporates growth, friendship, adventure, and family elements, presenting child readers with a multidimensional spiritual world.
## Other Universal Features and Their Artistic Expression
In addition to narrative methods and thematic tendencies, children's literature exhibits a series of universal artistic features, including anthropomorphism, repetitive language, symbolism and metaphor, and educational significance. These features collectively constitute the unique aesthetic style of children's literature, subtly influencing children's cognitive development and aesthetic cultivation.
**Anthropomorphism** is one of the most distinctive artistic features of children's literature. In children's literary works, animals, plants, and even inanimate objects are often endowed with human thoughts, emotions, and behaviors, greatly enhancing the story's fun and imagination. Research shows that anthropomorphism is a frequently used technique by children's literature creators to attribute human characteristics to animals, enabling them to possess perception and communication abilities【19†source】. Through anthropomorphism, children can more easily understand abstract concepts and moral principles, as anthropomorphic characters translate complex ideas into familiar emotional and behavioral patterns. For example, in scientific fairy tales, anthropomorphic characters can help explain scientific principles, making abstract concepts tangible【18†source】. Anthropomorphism not only enriches the narrative techniques of children's literature but also provides children with a unique perspective for understanding the relationship between humans and nature. It is worth noting that excessive anthropomorphism may affect children's accurate understanding of the animal world, so modern children's literature pays more attention to balancing the natural attributes of characters with human characteristics when employing anthropomorphic techniques【19†source】.
**Repetitive language** is extremely common in children's literature, a linguistic feature rooted in oral traditions originally intended to aid memory and dissemination【20†source】. In children's literature, the repetitive use of words, phrases, or sentences serves multiple functions: constructing the story's framework, emphasizing key information, creating rhythm and musicality, and training children's vocabulary skills. For example, in *The Very Hungry Caterpillar*, the author repeatedly uses phrases like "On Monday, he ate one apple. On Tuesday, he ate two pears..." This not only builds the story's structure but also helps children learn numbers and days of the week. Repetitive structures also aid children in developing an awareness of language patterns during the early stages of language acquisition, fostering a sense of language and memory skills【21†source】. Research indicates that repetitive language in children's literature promotes children's language acquisition, helping them master vocabulary and syntactic rules. At the same time, this linguistic feature enhances the story's participatory nature, as children can often join in reciting the repetitive parts, gaining a sense of achievement.
**Symbolism and metaphor** are common expressive techniques in children's literature, conveying abstract meanings through concrete imagery. Symbolism uses specific objects to represent abstract concepts or emotions, while metaphor connects two different things through comparison, creating new meanings. In children's literature, symbolism and metaphor are usually presented in a simple and clear manner, avoiding overly complex interpretations. For example, the character configurations and metaphorical connotations in *The Wizard of Oz* are thought-provoking, as these characters not only breathe life into the story but also convey profound life philosophies through their symbolic meanings【24†source】. Symbolism and metaphor in children's literature are often related to themes such as growth, friendship, and courage, helping children understand abstract concepts through concrete and figurative expressions. Research shows that appropriate metaphors can promote children's cognitive development, stimulating their imagination and creativity【23†source】. As children grow older, their ability to understand symbolism and metaphor gradually improves, providing children's literature with multi-layered meaning spaces.
**Educational significance** is an indispensable component of children's literature, which inherently carries the gene of children's education【22†source】. Excellent children's literary works simultaneously possess entertainment and educational functions, not only helping children understand the objective world, enrich their inner emotions, and acquire life wisdom but also cultivating their perception, aesthetic sensibility, thinking skills, and creativity【15†source】. Educational significance in children's literature is often not directly presented through preaching but naturally revealed through the storyline and characters' fates. For example, many classic fairy tales convey the importance of qualities such as bravery and honesty through the protagonist's adventurous experiences, while popular science books introduce scientific knowledge through interesting plots and characters. Experts point out that children's literature writers should shoulder the important responsibility of education, incorporating care for children's mental growth into their works【22†source】. It is worth noting that the educational significance of children's literature should respect children's receptive abilities, avoiding excessive preaching or moral indoctrination, and instead naturally influencing children's values and behaviors through artistic appeal.
**Storytelling** is the most basic and essential feature of children's literature. Children's perceptual, imagery-driven, and novelty-seeking cognitive characteristics and receptive psychology further determine that "storytelling" is an indispensable ontological feature of children's literature【25†source】. Engaging plots are the most crucial aspect of children's literary works because, compared to adults, children's understanding of things relies mainly on intuition, and plots play a key role in guiding children's comprehension of stories【26†source】. The storytelling quality of children's literature is reflected in multiple aspects: clear cause-and-effect relationships, compact narrative rhythm, and satisfying endings. These elements work together to immerse children in the story world, providing emotional satisfaction and cognitive inspiration. As researchers have noted, plots must be performed by specific characters in specific situations to convey individual experiences in unique space-time environments【7†source】. In children's literature, storytelling is not merely an artistic technique but a bridge connecting children to the world. Through stories, children can safely experience various life scenarios and learn methods for coping with challenges.
In terms of **language features**, children's literature typically adopts a concise, clear, and vivid language style, avoiding complex sentence structures and abstract vocabulary. This linguistic characteristic aligns with children's cognitive development levels, facilitating their understanding and acceptance. At the same time, the language of children's literature is often rich in rhythm and musicality, enhancing readability and memorability through techniques such as rhyming and repetition. For example, Michael Rosen's children's literary works extensively employ repetitive structures and rhymes, a language usage that helps children develop an awareness of language patterns during the early stages of language acquisition【21†source】. The language of children's literature also often includes rich sensory descriptions and emotional expressions, stimulating children's imagination through concrete and tangible imagery. Scholar Jay Davis's research shows that the interactive use of language in children's literature can influence children's language habits and promote their language development【21†source】.
In summary, these universal features of children's literature collectively constitute its unique artistic charm and educational value. Anthropomorphism and symbolism expand children's imaginative spaces, repetitive language and storytelling promote language acquisition and cognitive development, and the natural integration of educational significance achieves the artistic effect of "teaching through entertainment." These features do not exist in isolation but are interwoven and organically unified, collectively serving the comprehensive development of child readers.
## Conclusion
Through a systematic analysis of the narrative methods, thematic tendencies, and other universal features of children's literature, we can draw the following conclusions: As a special literary genre, the creation and reception of children's literature follow unique rules. In terms of narrative methods, children's literature flexibly employs various techniques such as first-person, third-person, narration, and interactive storytelling to adapt to children's cognitive characteristics and receptive psychology. Among these, the use of the childhood perspective is particularly important, as it enhances the work's sense of realism and intimacy, enabling child readers to develop emotional resonance【1†source】【2†source】. In terms of thematic choices, growth, adventure, friendship, and family constitute the main content of children's literature. These themes not only satisfy children's curiosity and desire to explore but also subtly influence their cognitive development and emotional shaping【3†source】【9†source】. Other universal features such as anthropomorphism, repetitive language, symbolism, and educational significance collectively form the unique artistic style and educational value of children's literature【18†source】【20†source】【24†source】.
These characteristics of children's literature do not exist in isolation but are interconnected and organically unified. For example, adventure themes are often combined with third-person omniscient narration to attract child readers through compact plots and vivid descriptions; friendship themes frequently employ first-person narration to enhance emotional resonance; and anthropomorphism is commonly found in nature-themed works, helping children understand the relationship between humans and nature. These features collectively serve the comprehensive development of child readers, meeting their entertainment needs while promoting their cognitive growth and emotional maturity.
From an academic research perspective, children's literature studies should emphasize the application of narrative theory, as narrative theory focuses more on the "how" of storytellingโnarrative formโwhich aligns closely with the research focus of children's literatureใ0โ sourceใ. At the same time, cognitive research methods provide new perspectives for children's literature studies. By combining cognitive science with literary theory, we can gain a deeper understanding of how children's literature influences children's thinking and cognitive developmentใ4โ sourceใ. Future research should continue to explore the application of these theoretical methods in children's literature studies while paying attention to the intersection and integration of children's literature with emerging fields such as digital media and interdisciplinary education.
From a creative practice perspective, children's literature writers should fully grasp children's cognitive characteristics and emotional needs, incorporating care for growth and educational wisdom into their works. As experts have pointed out, excellent children's literary works should be grounded in reality, rich in depth, and have strong inspiration and emotional appeal, guiding children to comprehensively understand the world and correctly recognize themselves and society【14†source】. At the same time, children's literature creation should keep pace with the times, addressing new problems and challenges faced by contemporary children, such as media literacy in the digital age and identity formation in multicultural contexts, to provide targeted spiritual nourishment for children.
From an educational application perspective, children's literature should fully leverage its unique role in children's mental growth. Through carefully designed reading activities, teachers and parents can help children deeply understand the themes and meanings in works, guiding them to connect reading experiences with real life. Research shows that children's literature plays an increasingly important role in language education, the construction of a reading society, and children's mental growth【22†source】. Therefore, children's literature should be incorporated as an important component of school and family education, promoting children's cognitive development and emotional maturity through activities such as reading sharing, role-playing, and creative writing.
In summary, as a unique art form and educational medium, the common characteristics of children's literature constitute an organic whole, collectively serving the comprehensive development of child readers. By deeply understanding these features and their mechanisms of influence, we can better create, research, and apply children's literature, providing high-quality spiritual nourishment for children's healthy growth. Future children's literature research should continue to deepen theoretical exploration, expand research methods, and strengthen interdisciplinary collaboration to address the ever-changing needs of children and the challenges of the times, promoting the continuous development of children's literature.
</div>
### Function Call
GLM-4-32B-0414 supports calling external tools in JSON format. This can be done via HuggingFace Transformers, vLLM, or SGLang.
The message format for tool calling is as follows:
```json
{
    "role": "assistant",
    "metadata": function_name,
    "content": json.dumps(call_arguments, ensure_ascii=False)
}
```
The message format for tool execution results is as follows:
```json
{
    "role": "observation",
    "content": json.dumps(tool_response, ensure_ascii=False) if not isinstance(tool_response, str) else tool_response
}
```
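When the model is served with vLLM or SGLang, the tools can instead be passed through the framework's OpenAI-compatible API. Below is a minimal sketch using the `openai` client, assuming a server is already running locally at `http://localhost:8000/v1` with tool-call parsing enabled (exact launch flags vary by framework and version; the endpoint and tool definition here are illustrative):

```python
from openai import OpenAI

# Assumes an OpenAI-compatible endpoint (e.g. started with `vllm serve THUDM/GLM-4-32B-0414 ...`)
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

tools = [{
    "type": "function",
    "function": {
        "name": "realtime_aqi",
        "description": "Get real-time air quality information for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string", "description": "City name"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="THUDM/GLM-4-32B-0414",
    messages=[{"role": "user", "content": "How is the air quality in Beijing today?"}],
    tools=tools,
)
# If the model decided to call a tool, the parsed call is exposed here:
print(response.choices[0].message.tool_calls)
```

A compatible server translates between the message format shown above and the OpenAI `tool_calls` schema.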
The following example demonstrates the process of GLM-4-32B-0414 calling a tool and generating a final response using HuggingFace Transformers.
```python
import ast
import json
import re

from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "THUDM/GLM-4-32B-0414"

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, device_map="auto")


def is_function_call(single_message):
    """Determine whether the current assistant message is a function call."""
    pattern = re.compile(r'([^\n`]*?)\n({.*?})(?=\w*\n|$)', re.DOTALL)
    matches = pattern.findall(single_message)
    if not matches:
        return False
    func_name, args_str = matches[0]
    func_name = func_name.strip()
    try:
        parsed_args = json.loads(args_str)
    except json.JSONDecodeError:
        try:
            parsed_args = ast.literal_eval(args_str)
        except (ValueError, SyntaxError):
            return False
    return {"name": func_name, "arguments": parsed_args}


def realtime_aqi(city):
    """Weather query tool (mock responses for demonstration)."""
    if '北京' in city.lower():
        return json.dumps({'city': '北京', 'aqi': '10', 'unit': 'celsius'}, ensure_ascii=False)
    elif '上海' in city.lower():
        return json.dumps({'city': '上海', 'aqi': '72', 'unit': 'fahrenheit'}, ensure_ascii=False)
    else:
        return json.dumps({'city': city, 'aqi': 'unknown'}, ensure_ascii=False)


def build_system_prompt(tools):
    """Construct the system prompt based on the list of available tools."""
    if tools is None:
        tools = []
    value = "# 可用工具"  # "Available Tools" section header
    contents = []
    for tool in tools:
        content = f"\n\n## {tool['function']['name']}\n\n{json.dumps(tool['function'], ensure_ascii=False, indent=4)}"
        # "When calling the above function, please use JSON format for the arguments."
        content += "\n在调用上述函数时，请使用 Json 格式表示调用的参数。"
        contents.append(content)
    value += "".join(contents)
    return value


tools = [
    {
        "type": "function",
        "function": {
            "name": "realtime_aqi",
            # "Weather forecast. Get real-time air quality: current AQI, PM2.5, PM10."
            "description": "天气预报。获取实时空气质量。当前空气质量，PM2.5，PM10信息",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "description": "城市名"  # city name
                    }
                },
                "required": [
                    "city"
                ]
            }
        }
    }
]

system_prompt = build_system_prompt(tools)

message = [
    {"role": "system", "content": system_prompt},
    # "Today's weather in Beijing and Shanghai"
    {"role": "user", "content": "北京和上海今天的天气情况"}
]
print(f"User Message: {message[-1]['content']}")

while True:
    inputs = tokenizer.apply_chat_template(
        message,
        return_tensors="pt",
        add_generation_prompt=True,
        return_dict=True,
    ).to(model.device)

    generate_kwargs = {
        "input_ids": inputs["input_ids"],
        "attention_mask": inputs["attention_mask"],
        "max_new_tokens": 1024,
        "do_sample": True,
    }
    out = model.generate(**generate_kwargs)
    generate_resp = tokenizer.decode(out[0][inputs["input_ids"].shape[1]:-1], skip_special_tokens=False)
    stop_sequence = tokenizer.decode(out[0][-1:], skip_special_tokens=False)

    # The model ends with <|user|> once it has produced a final answer.
    if stop_sequence == "<|user|>":
        print(f"Assistant Response: {generate_resp.strip()}")
        break

    function_calls = []
    for m in generate_resp.split("<|assistant|>"):
        fc_decode = is_function_call(m.strip())
        if fc_decode:
            message.append({"role": "assistant", "metadata": fc_decode['name'], "content": json.dumps(fc_decode['arguments'], ensure_ascii=False)})
            print(f"Function Call: {fc_decode}")
            function_calls.append(fc_decode)
        else:
            message.append({"role": "assistant", "content": m})
            print(f"Assistant Response: {m.strip()}")

    # Execute each requested tool and feed the result back as an observation.
    for fc in function_calls:
        function_response = realtime_aqi(
            city=fc["arguments"]["city"],
        )
        print(f"Function Response: {function_response}")
        message.append({"role": "observation", "content": function_response})
```
## Evaluation Results
<div style="text-align: center;">
<img src="https://raw.githubusercontent.com/THUDM/GLM-4/refs/heads/main/resources/Bench-32B.png" style="width: 80%;" />
</div>
### GLM-4-0414 Series
| Model            | IFEval | BFCL-v3 (Overall) | BFCL-v3 (MultiTurn) | TAU-Bench (Retail) | TAU-Bench (Airline) | SimpleQA | HotpotQA |
| ---------------- | ------ | ----------------- | ------------------- | ------------------ | ------------------- | -------- | -------- |
| Qwen2.5-Max | 85.6 | 50.9 | 30.5 | 58.3 | 22.0 | 79.0 | 52.8 |
| GPT-4o-1120 | 81.9 | 69.6 | 41.0 | 62.8 | 46.0 | 82.8 | 63.9 |
| DeepSeek-V3-0324 | 83.4 | 66.2 | 35.8 | 60.7 | 32.4 | 82.6 | 54.6 |
| DeepSeek-R1 | 84.3 | 57.5 | 12.4 | 33.0 | 37.3 | 83.9 | 63.1 |
| GLM-4-32B-0414 | 87.6 | 69.6 | 41.5 | 68.7 | 51.2 | 88.1 | 63.8 |
> For `SimpleQA` and `HotpotQA`, we sampled nearly 500 test cases from each test set, provided all models with basic `search` and `click` tools, ensured other settings remained consistent, and averaged the results over 3 runs.
| Model | Framework | [SWE-bench Verified](https://openai.com/index/introducing-swe-bench-verified/) | [SWE-bench Verified mini](https://github.com/mariushobbhahn/SWEBench-verified-mini) |
|---|---|---|---|
| GLM-4-32B-0414 | Moatless<sup>[1]</sup> | 33.8 | 38.0 |
| GLM-4-32B-0414 | Agentless<sup>[2]</sup> | 30.7 | 34.0 |
| GLM-4-32B-0414 | OpenHands<sup>[3]</sup> | 27.2 | 28.0 |
[1] [Moatless v0.0.3](https://github.com/aorwall/moatless-tools) used the following parameters: `response_format="react", thoughts_in_action=False, max_iterations=30`. No retries on failed trajectories; other settings are default.
[2] [Agentless v1.5.0](https://github.com/OpenAutoCoder/Agentless) used [BGE](https://github.com/FlagOpen/FlagEmbedding/blob/master/README.md) as the embedding model and [FAISS](https://github.com/facebookresearch/faiss) for similarity search. To speed up patch verification while maintaining performance, the timeout for running a single instance was changed from the default 300s to 180s.
[3] [OpenHands v0.29.1](https://github.com/All-Hands-AI/OpenHands/tree/main) did not use YaRN context extension but limited runs to a maximum of 60 iterations and summarized the history to prevent exceeding the 32K context limit. Summarization was configured as `llm_config="condenser", keep_first=1, max_size=32`. No retries on failed trajectories.
|
gogo5142367/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mighty_striped_mosquito | gogo5142367 | 2025-04-27T22:24:20Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am mighty striped mosquito",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-25T10:39:11Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mighty_striped_mosquito
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am mighty striped mosquito
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mighty_striped_mosquito
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="gogo5142367/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mighty_striped_mosquito", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
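For readers who want to reproduce a comparable setup, the sketch below shows the general shape of a GRPO run with TRL's `GRPOTrainer`. The reward function and dataset are placeholders, since the actual rl-swarm reward setup and training data are not documented in this card:

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Placeholder dataset; the actual swarm training data is not documented here.
dataset = load_dataset("trl-lib/tldr", split="train")

# Placeholder reward: completions closer to 64 characters score higher.
def reward_len(completions, **kwargs):
    return [-abs(64 - len(completion)) for completion in completions]

training_args = GRPOConfig(output_dir="Qwen2.5-0.5B-GRPO", logging_steps=10)
trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```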
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mlfoundations-dev/c1_science_0d_1s_1k | mlfoundations-dev | 2025-04-27T22:24:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T20:49:17Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: c1_science_0d_1s_1k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# c1_science_0d_1s_1k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/c1_science_0d_1s_1k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 24
- total_train_batch_size: 96
- total_eval_batch_size: 32
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7.0
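Note that the effective batch size checks out: 1 per device × 4 devices × 24 accumulation steps = 96, matching the total above. As a rough sketch, the same configuration expressed with plain 🤗 `TrainingArguments` (the run itself used LLaMA-Factory, whose config format differs) would look like:

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above.
args = TrainingArguments(
    output_dir="c1_science_0d_1s_1k",
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=24,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=7.0,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```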
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
xiwenc1/OpenRS-DR_GRPO_DPP | xiwenc1 | 2025-04-27T22:20:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:knoveleng/open-rs",
"arxiv:2402.03300",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T08:46:15Z | ---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
datasets: knoveleng/open-rs
library_name: transformers
model_name: OpenRS-DR_GRPO_DPP
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for OpenRS-DR_GRPO_DPP
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) on the [knoveleng/open-rs](https://huggingface.co/datasets/knoveleng/open-rs) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="xiwenc1/OpenRS-DR_GRPO_DPP", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/myopen-rs/huggingface/runs/s01rxzpo)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
lemon-mint/HyperCLOVA-X-HyperClever-V2-20250428-preview | lemon-mint | 2025-04-27T22:20:17Z | 0 | 1 | null | [
"safetensors",
"llama",
"ko",
"en",
"arxiv:1910.09700",
"base_model:naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-1.5B",
"base_model:finetune:naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-1.5B",
"license:other",
"region:us"
] | null | 2025-04-27T12:49:28Z | ---
license: other
license_name: hyperclovax-seed
license_link: >-
https://huggingface.co/naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-1.5B/raw/main/LICENSE
base_model:
- naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-1.5B
language:
- ko
- en
---
HyperClever-V2 is a model based on NAVER's HyperCLOVAX-SEED model, to which we have added the ability to conduct a systematic thinking process using reasoning tokens.
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dgambettaphd/M_llm2_gen1_run0_X_doc1000_synt64_tot128_SYNLAST | dgambettaphd | 2025-04-27T22:17:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-27T22:17:25Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ArtusDev/remnant-ep1-mrg | ArtusDev | 2025-04-27T22:17:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"glm4",
"text-generation",
"conversational",
"zh",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T22:08:57Z | ---
license: mit
language:
- zh
- en
pipeline_tag: text-generation
library_name: transformers
---
# GLM-4-32B-0414
## Introduction
The GLM family welcomes new members, the **GLM-4-32B-0414** series models, featuring 32 billion parameters. Its performance is comparable to OpenAI's GPT series and DeepSeek's V3/R1 series, and it supports very user-friendly local deployment. GLM-4-32B-Base-0414 was pre-trained on 15T of high-quality data, including substantial reasoning-type synthetic data, which lays the foundation for subsequent reinforcement learning extensions. In the post-training stage, we employed human preference alignment for dialogue scenarios. Additionally, using techniques like rejection sampling and reinforcement learning, we enhanced the model's performance in instruction following, engineering code, and function calling, thus strengthening the atomic capabilities required for agent tasks. GLM-4-32B-0414 achieves good results in engineering code, Artifact generation, function calling, search-based Q&A, and report generation. In particular, on several benchmarks such as code generation and specific Q&A tasks, GLM-4-32B-Base-0414 achieves performance comparable to that of larger models like GPT-4o and DeepSeek-V3-0324 (671B).
**GLM-Z1-32B-0414** is a reasoning model with deep thinking capabilities. This was developed based on GLM-4-32B-0414 through cold start, extended reinforcement learning, and further training on tasks including mathematics, code, and logic. Compared to the base model, GLM-Z1-32B-0414 significantly improves mathematical abilities and the capability to solve complex tasks. During training, we also introduced general reinforcement learning based on pairwise ranking feedback, which enhances the model's general capabilities.
**GLM-Z1-Rumination-32B-0414** is a deep reasoning model with rumination capabilities (against OpenAI's Deep Research). Unlike typical deep thinking models, the rumination model is capable of deeper and longer thinking to solve more open-ended and complex problems (e.g., writing a comparative analysis of AI development in two cities and their future development plans). Z1-Rumination is trained through scaling end-to-end reinforcement learning with responses graded by the ground truth answers or rubrics and can make use of search tools during its deep thinking process to handle complex tasks. The model shows significant improvements in research-style writing and complex tasks.
Finally, **GLM-Z1-9B-0414** is a surprise. We employed all the aforementioned techniques to train a small model (9B). GLM-Z1-9B-0414 exhibits excellent capabilities in mathematical reasoning and general tasks. Its overall performance is top-ranked among all open-source models of the same size. Especially in resource-constrained scenarios, this model achieves an excellent balance between efficiency and effectiveness, providing a powerful option for users seeking lightweight deployment.
## Showcase
### Animation Generation
<table>
<tr>
<td style="text-align: center; font-size: 16px; font-weight: bold; padding: 10px; width: 420px;">
GLM-Z1-32B-0414
</td>
<td style="text-align: center; font-size: 16px; font-weight: bold; padding: 10px; width: 420px;">
GLM-4-32B-0414
</td>
</tr>
<tr>
<td style="vertical-align: top; padding: 10px; width: 420px;">
<video src="https://github.com/user-attachments/assets/849ff9fd-b54d-4c74-9ee5-3412e1a09e32"
style="width: 400px; height: 300px; object-fit: contain;" autoplay loop muted playsinline></video>
<div style="margin-top: 10px; font-size: 14px; color: #333; width: 400px;">
write a Python program that shows a ball bouncing inside a spinning hexagon. The ball should be affected by gravity and friction, and it must bounce off the rotating walls realistically
</div>
</td>
<td style="vertical-align: top; padding: 10px; width: 420px;">
<video src="https://github.com/user-attachments/assets/8dccdb9d-cc44-4732-b438-74a4e3cb9dfb"
style="width: 400px; height: 300px; object-fit: contain;" autoplay loop muted playsinline></video>
<div style="margin-top: 10px; font-size: 14px; color: #333; width: 400px;">
Use HTML to simulate the scenario of a small ball released from the center of a rotating hexagon. Consider the collision between the ball and the hexagon's edges, the gravity acting on the ball, and assume all collisions are perfectly elastic. (Prompt translated from Chinese)
</div>
</td>
</tr>
</table>
### Web Design
<table>
<tr>
<td style="text-align: center; font-size: 16px; font-weight: bold; padding: 10px; width: 420px;">
GLM-4-32B-0414
</td>
<td style="text-align: center; font-size: 16px; font-weight: bold; padding: 10px; width: 420px;">
GLM-4-32B-0414
</td>
</tr>
<tr>
<td style="vertical-align: top; padding: 10px; width: 420px;">
<img src="https://github.com/user-attachments/assets/bd9c1fc1-c784-4e8f-9c76-5f7389a715f1"/>
<div style="margin-top: 10px; font-size: 14px; color: #333; width: 400px;">
Design a drawing board that supports custom function plotting, allowing adding and deleting custom functions, and assigning colors to functions. (Prompt translated from Chinese)
</div>
</td>
<td style="vertical-align: top; padding: 10px; width: 420px;">
<img src="https://github.com/user-attachments/assets/7ad12d52-9229-4278-8d1b-ffbf43e99070"/>
<div style="margin-top: 10px; font-size: 14px; color: #333; width: 400px;"> Design a UI for a mobile machine learning platform, which should include interfaces for training tasks, storage management, and personal statistics. The personal statistics interface should use charts to display the user's resource usage over a period. Use Tailwind CSS to style the page, and display these 3 mobile interfaces tiled on a single HTML page. (Prompt translated from Chinese) </div>
</td>
</tr>
</table>
### SVG Generation
<table>
<tr>
<td style="text-align: center; font-size: 16px; font-weight: bold; padding: 10px; width: 420px;">
GLM-4-32B-0414
</td>
<td style="text-align: center; font-size: 16px; font-weight: bold; padding: 10px; width: 420px;">
GLM-4-32B-0414
</td>
</tr>
<tr>
<td style="vertical-align: top; padding: 10px; width: 420px;">
<img src="https://github.com/user-attachments/assets/9407e4c1-1876-4ab5-838c-839836fb418a"/>
<div style="margin-top: 10px; font-size: 14px; color: #333; width: 400px;">
Create a misty Jiangnan scene using SVG. (Prompt translated from Chinese)
</div>
</td>
<td style="vertical-align: top; padding: 10px; width: 420px;">
<img src="https://github.com/user-attachments/assets/bcce8c5a-cedf-45c8-b666-ddb023d5b49c"/>
<div style="margin-top: 10px; font-size: 14px; color: #333; width: 400px;"> Use SVG to illustrate the training process of an LLM. (Prompt translated from Chinese) </div>
</td>
</tr>
</table>
### Search-Based Writing
For search-based writing tasks, we use the following system prompt to have the model respond based on search results:
```
Please answer the user's question based on the search results provided.

## Notes
1. Make full use of and organize the collected information rather than simply copy-pasting it, and produce a professional answer that meets the user's requirements and has depth.
2. When the information provided is sufficient, make your answer as long as possible; starting from the user's intent, give a reply that carries enough information and covers multiple angles.
3. Not all search results are closely relevant to the user's question; screen, filter, and use them carefully.
4. Answers to objective questions are usually very short; you may add one or two sentences of related information to enrich the content.
5. Make sure your reply is well formatted and highly readable. For comparisons or enumerations across multiple entities, make good use of list formats to help the user understand the information better.
6. Unless the user requests otherwise, keep the language of your answer consistent with the language of the user's question.
7. Where appropriate, cite search results at the end of a sentence using a format such as: 【0†source】.
```
In actual use, you can obtain search results through methods such as `RAG` or `WebSearch`, and wrap them in an `observation` message, for example:
```json
[
    {
        "role": "user",
        "content": "Explore the common characteristics of children's literature, with a focus on its narrative techniques and thematic tendencies. This includes narrative techniques: common approaches in children's literature such as first-person, third-person, omniscient narrator, and interactive narration, and their influence on young readers. It also includes thematic tendencies: recurring themes in children's literature such as growth, adventure, friendship, and family, with an analysis of how these themes impact children's cognitive and emotional development. Additionally, other universal features such as the use of personification, repetitive language, symbolism and metaphor, and educational value should be considered. Please provide a detailed analytical report based on academic research, classic examples of children's literature, and expert opinions."
    },
    {
        "role": "observation",
        "content": "【{id}†{title}†{url}】\n{content}"
    },
    ...
]
```
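In practice, the `observation` content is just the retrieved documents concatenated in that format. A minimal sketch of the glue code, assuming the retriever returns a list of dicts with `title`, `url`, and `content` keys (the helper and the sample data are hypothetical):

```python
def format_search_results(results):
    """Wrap retrieved documents in the 【{id}†{title}†{url}】 format shown above."""
    return "\n\n".join(
        f"【{i}†{r['title']}†{r['url']}】\n{r['content']}"
        for i, r in enumerate(results)
    )

# Hypothetical results; in practice these come from a RAG retriever or a web-search API.
results = [
    {"title": "Example Source", "url": "https://example.com", "content": "..."},
]

# The search system prompt shown earlier would be prepended as a "system" message.
message = [
    {"role": "user", "content": "Explore the common characteristics of children's literature ..."},
    {"role": "observation", "content": format_search_results(results)},
]
```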
For the above prompt, we use an internal or external search model to obtain the search results. Using the format shown above, we can generate the following analysis report:
<div style="height: 400px; width: 100%; overflow: auto; border: 5px solid #ddd; padding: 20px;">
# Analysis Report on Common Characteristics of Children's Literature
Children's literature, as a literary genre specifically created for children, possesses unique artistic features and expressive techniques. This report will comprehensively analyze the common characteristics of children's literature from three dimensions: narrative methods, thematic tendencies, and other universal features, integrating academic research, classic examples of children's literature, and expert viewpoints.
## Narrative Methods and Their Impact on Child Readers
The narrative methods of children's literature exhibit diverse characteristics, including first-person, third-person, narration, and interactive storytelling. These different narrative perspectives and methods profoundly influence children's reading experiences and cognitive development.
**First-person narration** is common in children's literature, unfolding the story from the perspective of a child protagonist. This narrative style bridges the gap between the reader and the story, making it easier for child readers to identify with the characters and develop emotional resonance. For example, many coming-of-age novels use first-person narration, allowing the protagonist to directly recount their experiences and feelings, making readers feel as though they are experiencing the joys and sorrows of growth alongside the protagonist. This narrative perspective lends the work a strong sense of authenticity and intimacy, helping to cultivate children's empathy【1†source】.
**Third-person narration** offers a broader perspective, allowing the author to flexibly switch between different characters' viewpoints and present richer layers of the story. In children's literature, third-person omniscient narration enables the author to control the narrative pace, revealing or concealing information as needed to guide children's attention. At the same time, third-person narration facilitates direct dialogue between the author and the reader, conveying values or explaining complex concepts through narration. This narrative method positively influences children's macro-thinking and comprehensive understanding【1†source】.
**Narration (authorial intrusion)** is a unique narrative technique in children's literature, where the author directly appears as the "storyteller," explaining the background, commenting on characters, or posing questions to the reader. This technique is particularly common in classic fairy tales, such as the opening lines of *Andersen's Fairy Tales*: "Once, there was a child..." Narration helps children understand the story's context, fills cognitive gaps, and conveys the author's educational intent. Research shows that appropriate authorial intrusion aids children in grasping the story's structure and improving reading comprehension【5†source】.
**Interactive storytelling** is a new trend in contemporary children's literature, especially prominent in the digital media era. Interactive storytelling breaks the traditional unidirectional author-reader relationship, encouraging child readers to participate in the story's creation, such as by choosing plot directions, character dialogues, or endings. This participatory reading enhances children's sense of agency and fosters decision-making skills and creative thinking. For example, some children's reading apps incorporate interactive elements, allowing children to influence the story's development through clicks, drag-and-drop actions, and other operations, thereby gaining a stronger sense of immersion and achievement【6†source】. Interactive storytelling transforms children from passive information recipients into active meaning-makers, uniquely contributing to the development of their subjectivity.
*Table: Common Narrative Methods in Children's Literature and Their Effects*
| **Narrative Method** | **Characteristics** | **Impact on Child Readers** | **Classic Examples** |
|----------------------|--------------------|----------------------------|---------------------|
| **First-Person** | Told from the child protagonist's perspective | Enhances immersion, fosters empathy | *Charlotte's Web*, *The Straw House* |
| **Third-Person** | Omniscient or limited perspective | Expands horizons, develops comprehensive understanding | *Harry Potter* series |
| **Narration** | Direct authorial intrusion into the narrative | Aids comprehension, conveys values | *Andersen's Fairy Tales* |
| **Interactive** | Encourages reader participation in creation | Cultivates agency and creative thinking | Children's interactive reading apps |
Notably, the narrative methods of children's literature are often closely intertwined with the **childhood perspective**. The childhood perspective does not necessarily mean the narrator must be a child but refers to the work's ability to describe the world to the greatest extent from a child's heart, expressing their inner psychology and external circumstances【2†source】. Through the childhood perspective, readers can embark on a spiritual journey with a child's mindset, a narrative strategy that creates a strong sense of realism, allowing child readers to achieve emotional identification and cognitive resonance during the reading process【1†source】. The use of the childhood perspective gives the work's language a perceptual and naive quality, often with a prose-like and spatial structure, artistic features that align with children's cognitive characteristics and aid their acceptance and understanding【2†source】.
## Thematic Tendencies and Their Impact on Children's Cognitive and Emotional Development
The thematic choices in children's literature exhibit distinct tendencies, with common themes including growth, adventure, friendship, and family. These themes not only form the core content of children's literature but also subtly influence children's cognitive development and emotional shaping.
**The theme of growth** is one of the central motifs in children's literature. Growth narratives are regarded as the artistic lifeblood of children's literature, focusing on depicting the pivotal moments of rapid psychological development in children, particularly the awakening and establishment of self-awareness【3†source】. Growth literature typically includes three elements: an artistic portrayal of the self-awareness construction process in growing adolescents, a developmental story with logical propulsion, and the presentation of the protagonist's spiritual trials and quest for direction【3†source】. By reading growth-themed works, child readers can indirectly experience the confusion and breakthroughs of growing up and understand the formation of self-identity. Classics such as Astrid Lindgren's *Pippi Longstocking* and Cao Wenxuan's *The Straw House* vividly depict children's psychological growth trajectories in specific environments. Research indicates that growth-themed literary works help children build a positive self-concept and develop the courage and resilience to face challenges, positively contributing to their psychological development【9†source】.
**The theme of adventure** holds an important place in children's literature, satisfying children's curiosity about exploring the unknown. Adventure stories often feature unusual settings and unknown challenges, with the protagonist growing through overcoming difficulties. Classics like *Robinson Crusoe* and *The Adventures of Tom Sawyer* attract child readers with thrilling plots while conveying the importance of qualities such as courage, wisdom, and perseverance. The impact of adventure themes on children's cognitive development mainly lies in expanding their imaginative space and fostering problem-solving skills. In adventure stories, children must analyze situations, make plans, and respond to unexpected events alongside the protagonist, a process that exercises their logical thinking and adaptability【14†source】. At the same time, the unfamiliar environments and novel experiences in adventure stories stimulate children's curiosity and desire to learn, laying the foundation for cultivating an exploratory spirit. As experts point out, excellent children's literature should be grounded in reality, rich in depth, and generate significant inspiration and emotional appeal, guiding children to comprehensively understand the world【14†source】.
**The theme of friendship** is equally prevalent in children's literature, reflecting children's emphasis on peer relationships. Friendship and love are regarded as humanity's most precious qualities, often depicted in children's literature as beacons in the night, guiding children toward the future【9†source】. Friendship stories typically revolve around interactions between children, portraying positive behaviors such as sharing, cooperation, and understanding. Examples include the genuine friendships among the children at Tomoe Gakuen in *Totto-Chan: The Little Girl at the Window* and the promise and mutual aid between Wilbur and Charlotte in *Charlotte's Web*. These stories help child readers recognize the value of friendship and learn how to build and maintain interpersonal relationships. Research shows that children need peer support during their growth, as friends provide crucial emotional anchors, offering the greatest emotional support and comfort in unfamiliar environments【16†source】. By reading friendship-themed works, children can learn social skills, develop empathy, and cultivate a spirit of cooperation, qualities essential for their social development【17†source】.
**The theme of family** is an indispensable subject in children's literature, depicting the emotional bonds and interaction patterns among family members. As the primary setting for children's earliest socialization, the family atmosphere and parenting styles profoundly impact children's mental health【10†source】. Family stories in children's literature often focus on parent-child relationships, sibling bonds, and other dynamics, such as Alice's relationship with her sister in *Alice's Adventures in Wonderland* and the Little Prince's interactions with the rose in *The Little Prince*. These stories help children understand the responsibilities and expectations of family roles and learn to handle conflicts within the family. Research indicates that a positive family atmosphere and parental support promote the development of children's positive psychological traits, while adverse family environments and parenting behaviors negatively affect their mental health【10†source】【11†source】. By reading family-themed works, children can gain emotional support, learn skills for managing family relationships, and establish healthy family values.
*Table: Common Themes in Children's Literature and Their Impact on Child Development*
| **Theme Type** | **Content Representation** | **Impact on Cognitive Development** | **Impact on Emotional Development** | **Classic Examples** |
|---------------|---------------------------|-------------------------------------|-------------------------------------|---------------------|
| **Growth** | Awakening of self-awareness, psychological trials and breakthroughs | Establishes self-concept, fosters problem-solving skills | Shapes positive self-identity, enhances psychological resilience | *The Straw House*, *Pippi Longstocking* |
| **Adventure** | Exploring the unknown, overcoming challenges | Expands imaginative space, exercises logical thinking | Cultivates courage and perseverance | *Robinson Crusoe*, *The Adventures of Tom Sawyer* |
| **Friendship** | Peer interactions, mutual aid and cooperation | Learns social skills, understands interpersonal dynamics | Develops empathy, builds a sense of belonging | *Charlotte's Web*, *Totto-Chan: The Little Girl at the Window* |
| **Family** | Parent-child relationships, sibling bonds | Understands social roles, learns communication skills | Gains emotional support, establishes secure attachments | *Alice's Adventures in Wonderland*, *The Little Prince* |
Regarding thematic choices, children's literature researcher Zhu Ziqiang proposed the famous "Three Major Motifs" theory, categorizing children's literary works into "the motif of love," "the motif of the mischievous child," and "the motif of nature"【8†source】. The motif of love focuses on emotional connections between children and adults or peers; the motif of the mischievous child portrays children's free-spirited nature; and the motif of nature emphasizes the harmonious relationship between children and the natural environment. These three motifs reflect the richness of the children's world from different angles, providing diverse emotional experiences and cognitive frameworks for children. Notably, these themes do not exist in isolation; outstanding works often organically integrate multiple themes. For example, the *Harry Potter* series incorporates growth, friendship, adventure, and family elements, presenting child readers with a multidimensional spiritual world.
## Other Universal Features and Their Artistic Expression
In addition to narrative methods and thematic tendencies, children's literature exhibits a series of universal artistic features, including anthropomorphism, repetitive language, symbolism and metaphor, and educational significance. These features collectively constitute the unique aesthetic style of children's literature, subtly influencing children's cognitive development and aesthetic cultivation.
**Anthropomorphism** is one of the most distinctive artistic features of children's literature. In children's literary works, animals, plants, and even inanimate objects are often endowed with human thoughts, emotions, and behaviors, greatly enhancing the story's fun and imagination. Research shows that anthropomorphism is a frequently used technique by children's literature creators to attribute human characteristics to animals, enabling them to possess perception and communication abilities【19†source】. Through anthropomorphism, children can more easily understand abstract concepts and moral principles, as anthropomorphic characters translate complex ideas into familiar emotional and behavioral patterns. For example, in scientific fairy tales, anthropomorphic characters can help explain scientific principles, making abstract concepts tangible【18†source】. Anthropomorphism not only enriches the narrative techniques of children's literature but also provides children with a unique perspective for understanding the relationship between humans and nature. It is worth noting that excessive anthropomorphism may affect children's accurate understanding of the animal world, so modern children's literature pays more attention to balancing the natural attributes of characters with human characteristics when employing anthropomorphic techniques【19†source】.
**Repetitive language** is extremely common in children's literature, a linguistic feature rooted in oral traditions originally intended to aid memory and dissemination【20†source】. In children's literature, the repetitive use of words, phrases, or sentences serves multiple functions: constructing the story's framework, emphasizing key information, creating rhythm and musicality, and training children's vocabulary skills. For example, in *The Very Hungry Caterpillar*, the author repeatedly uses phrases like "On Monday, he ate one apple. On Tuesday, he ate two pears..." This not only builds the story's structure but also helps children learn numbers and days of the week. Repetitive structures also aid children in developing an awareness of language patterns during the early stages of language acquisition, fostering a sense of language and memory skills【21†source】. Research indicates that repetitive language in children's literature promotes children's language acquisition, helping them master vocabulary and syntactic rules. At the same time, this linguistic feature enhances the story's participatory nature, as children can often join in reciting the repetitive parts, gaining a sense of achievement.
**Symbolism and metaphor** are common expressive techniques in children's literature, conveying abstract meanings through concrete imagery. Symbolism uses specific objects to represent abstract concepts or emotions, while metaphor connects two different things through comparison, creating new meanings. In children's literature, symbolism and metaphor are usually presented in a simple and clear manner, avoiding overly complex interpretations. For example, the character configurations and metaphorical connotations in *The Wizard of Oz* are thought-provoking, as these characters not only breathe life into the story but also convey profound life philosophies through their symbolic meanings【24†source】. Symbolism and metaphor in children's literature are often related to themes such as growth, friendship, and courage, helping children understand abstract concepts through concrete and figurative expressions. Research shows that appropriate metaphors can promote children's cognitive development, stimulating their imagination and creativity【23†source】. As children grow older, their ability to understand symbolism and metaphor gradually improves, providing children's literature with multi-layered meaning spaces.
**Educational significance** is an indispensable component of children's literature, which inherently carries the gene of children's education【22†source】. Excellent children's literary works simultaneously possess entertainment and educational functions, not only helping children understand the objective world, enrich their inner emotions, and acquire life wisdom but also cultivating their perception, aesthetic sensibility, thinking skills, and creativity【15†source】. Educational significance in children's literature is often not directly presented through preaching but naturally revealed through the storyline and characters' fates. For example, many classic fairy tales convey the importance of qualities such as bravery and honesty through the protagonist's adventurous experiences, while popular science books introduce scientific knowledge through interesting plots and characters. Experts point out that children's literature writers should shoulder the important responsibility of education, incorporating care for children's mental growth into their works【22†source】. It is worth noting that the educational significance of children's literature should respect children's receptive abilities, avoiding excessive preaching or moral indoctrination, and instead naturally influencing children's values and behaviors through artistic appeal.
**Storytelling** is the most basic and essential feature of children's literature. Children's perceptual, imagery-driven, and novelty-seeking cognitive characteristics and receptive psychology further determine that "storytelling" is an indispensable ontological feature of children's literature【25†source】. Engaging plots are the most crucial aspect of children's literary works because, compared to adults, children's understanding of things relies mainly on intuition, and plots play a key role in guiding children's comprehension of stories【26†source】. The storytelling quality of children's literature is reflected in multiple aspects: clear cause-and-effect relationships, a compact narrative rhythm, and satisfying endings. These elements work together to immerse children in the story world, providing emotional satisfaction and cognitive inspiration. As researchers have noted, plots must be performed by specific characters in specific situations to convey individual experiences in unique space-time environments【7†source】. In children's literature, storytelling is not merely an artistic technique but a bridge connecting children to the world. Through stories, children can safely experience various life scenarios and learn methods for coping with challenges.
In terms of **language features**, children's literature typically adopts a concise, clear, and vivid language style, avoiding complex sentence structures and abstract vocabulary. This linguistic characteristic aligns with children's cognitive development levels, facilitating their understanding and acceptance. At the same time, the language of children's literature is often rich in rhythm and musicality, enhancing readability and memorability through techniques such as rhyming and repetition. For example, Michael Rosen's children's literary works extensively employ repetitive structures and rhymes, a language usage that helps children develop an awareness of language patterns during the early stages of language acquisition【21†source】. The language of children's literature also often includes rich sensory descriptions and emotional expressions, stimulating children's imagination through concrete and tangible imagery. Scholar Jay Davis's research shows that the interactive use of language in children's literature can influence children's language habits and promote their language development【21†source】.
In summary, these universal features of children's literature collectively constitute its unique artistic charm and educational value. Anthropomorphism and symbolism expand children's imaginative spaces, repetitive language and storytelling promote language acquisition and cognitive development, and the natural integration of educational significance achieves the artistic effect of "teaching through entertainment." These features do not exist in isolation but are interwoven and organically unified, collectively serving the comprehensive development of child readers.
## Conclusion
Through a systematic analysis of the narrative methods, thematic tendencies, and other universal features of children's literature, we can draw the following conclusions: As a special literary genre, the creation and reception of children's literature follow unique rules. In terms of narrative methods, children's literature flexibly employs various techniques such as first-person, third-person, narration, and interactive storytelling to adapt to children's cognitive characteristics and receptive psychology. Among these, the use of the childhood perspective is particularly important, as it enhances the work's sense of realism and intimacy, enabling child readers to develop emotional resonance【1†source】【2†source】. In terms of thematic choices, growth, adventure, friendship, and family constitute the main content of children's literature. These themes not only satisfy children's curiosity and desire to explore but also subtly influence their cognitive development and emotional shaping【3†source】【9†source】. Other universal features such as anthropomorphism, repetitive language, symbolism, and educational significance collectively form the unique artistic style and educational value of children's literature【18†source】【20†source】【24†source】.
These characteristics of children's literature do not exist in isolation but are interconnected and organically unified. For example, adventure themes are often combined with third-person omniscient narration to attract child readers through compact plots and vivid descriptions; friendship themes frequently employ first-person narration to enhance emotional resonance; and anthropomorphism is commonly found in nature-themed works, helping children understand the relationship between humans and nature. These features collectively serve the comprehensive development of child readers, meeting their entertainment needs while promoting their cognitive growth and emotional maturity.
From an academic research perspective, children's literature studies should emphasize the application of narrative theory, as narrative theory focuses more on the "how" of storytelling, namely narrative form, which aligns closely with the research focus of children's literature【0†source】. At the same time, cognitive research methods provide new perspectives for children's literature studies. By combining cognitive science with literary theory, we can gain a deeper understanding of how children's literature influences children's thinking and cognitive development【4†source】. Future research should continue to explore the application of these theoretical methods in children's literature studies while paying attention to the intersection and integration of children's literature with emerging fields such as digital media and interdisciplinary education.
From a creative practice perspective, children's literature writers should fully grasp children's cognitive characteristics and emotional needs, incorporating care for growth and educational wisdom into their works. As experts have pointed out, excellent children's literary works should be grounded in reality, rich in depth, and possess strong emotional appeal, guiding children to comprehensively understand the world and correctly recognize themselves and society【14†source】. At the same time, children's literature creation should keep pace with the times, addressing new problems and challenges faced by contemporary children, such as media literacy in the digital age and identity formation in multicultural contexts, to provide targeted spiritual nourishment for children.
From an educational application perspective, children's literature should fully leverage its unique role in children's mental growth. Through carefully designed reading activities, teachers and parents can help children deeply understand the themes and meanings in works, guiding them to connect reading experiences with real life. Research shows that children's literature plays an increasingly important role in language education, the construction of a reading society, and children's mental growth【22†source】. Therefore, children's literature should be incorporated as an important component of school and family education, promoting children's cognitive development and emotional maturity through activities such as reading sharing, role-playing, and creative writing.
In summary, as a unique art form and educational medium, the common characteristics of children's literature constitute an organic whole, collectively serving the comprehensive development of child readers. By deeply understanding these features and their mechanisms of influence, we can better create, research, and apply children's literature, providing high-quality spiritual nourishment for children's healthy growth. Future children's literature research should continue to deepen theoretical exploration, expand research methods, and strengthen interdisciplinary collaboration to address the ever-changing needs of children and the challenges of the times, promoting the continuous development of children's literature.
</div>
### Function Call
GLM-4-32B-0414 supports calling external tools in JSON format. This can be done via HuggingFace Transformers, vLLM, or SGLang.
The message format for tool calling is as follows:
```json
{
"role": "asssitant",
"metadata": function_name,
"content": json.dumps(call_arguments, ensure_ascii=False)
}
```
The message format for tool execution results is as follows:
```json
{
"role": "observation",
"content": json.dumps(tool_response, ensure_ascii=False) if not isinstance(tool_response, str) else tool_response
}
```
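Concretely, a single tool-call turn and its matching observation can be assembled like this (a minimal sketch; the function name and sample values mirror the full example below):

```python
import json

# Assistant turn requesting a tool call: the function name goes in `metadata`,
# the JSON-encoded arguments in `content`.
call_arguments = {"city": "北京"}
tool_call_message = {
    "role": "assistant",
    "metadata": "realtime_aqi",
    "content": json.dumps(call_arguments, ensure_ascii=False),
}

# Observation turn carrying the tool's result back to the model.
tool_response = {"city": "北京", "aqi": "10", "unit": "celsius"}
observation_message = {
    "role": "observation",
    "content": json.dumps(tool_response, ensure_ascii=False)
    if not isinstance(tool_response, str)
    else tool_response,
}
```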
The following example demonstrates the process of GLM-4-32B-0414 calling a tool and generating a final response using HuggingFace Transformers.
```python
import json
import re
import ast
from transformers import AutoModelForCausalLM, AutoTokenizer
MODEL_PATH = "THUDM/GLM-4-32B-0414"
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, device_map="auto")
def is_function_call(single_message):
"""Determine whether the current system message is a function call."""
pattern = re.compile(r'([^\n`]*?)\n({.*?})(?=\w*\n|$)', re.DOTALL)
matches = pattern.findall(single_message)
if not matches:
return False
func_name, args_str = matches[0]
func_name = func_name.strip()
try:
parsed_args = json.loads(args_str)
except json.JSONDecodeError:
try:
parsed_args = ast.literal_eval(args_str)
        except (ValueError, SyntaxError):
return False
return {"name": func_name, "arguments": parsed_args}
def realtime_aqi(city):
"""Weather Query Tool"""
    if '北京' in city.lower():
        return json.dumps({'city': '北京', 'aqi': '10', 'unit': 'celsius'}, ensure_ascii=False)
    elif '上海' in city.lower():
        return json.dumps({'city': '上海', 'aqi': '72', 'unit': 'fahrenheit'}, ensure_ascii=False)
    else:
        return json.dumps({'city': city, 'aqi': 'unknown'}, ensure_ascii=False)
def build_system_prompt(tools):
"""Construct system prompt based on the list of available tools."""
if tools is None:
tools = []
value = "# ๅฏ็จๅทฅๅ
ท"
contents = []
for tool in tools:
content = f"\n\n## {tool['function']['name']}\n\n{json.dumps(tool['function'], ensure_ascii=False, indent=4)}"
content += "\nๅจ่ฐ็จไธ่ฟฐๅฝๆฐๆถ๏ผ่ฏทไฝฟ็จ Json ๆ ผๅผ่กจ็คบ่ฐ็จ็ๅๆฐใ"
contents.append(content)
value += "".join(contents)
return value
tools = [
{
"type": "function",
"function": {
"name": "realtime_aqi",
"description": "ๅคฉๆฐ้ขๆฅใ่ทๅๅฎๆถ็ฉบๆฐ่ดจ้ใๅฝๅ็ฉบๆฐ่ดจ้๏ผPM2.5๏ผPM10ไฟกๆฏ",
"parameters": {
"type": "object",
"properties": {
"city": {
"description": "ๅๅธๅ"
}
},
"required": [
"city"
]
}
}
}
]
system_prompt = build_system_prompt(tools)
message = [
{"role": "system", "content": system_prompt},
{"role": "user", "content": "ๅไบฌๅไธๆตทไปๅคฉ็ๅคฉๆฐๆ
ๅต"}
]
print(f"User Message: {message[-1]['content']}")
while True:
inputs = tokenizer.apply_chat_template(
message,
return_tensors="pt",
add_generation_prompt=True,
return_dict=True,
).to(model.device)
generate_kwargs = {
"input_ids": inputs["input_ids"],
"attention_mask": inputs["attention_mask"],
"max_new_tokens": 1024,
"do_sample": True,
}
out = model.generate(**generate_kwargs)
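    # Decode the newly generated tokens; the final token reveals whether the model stopped for user input.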
generate_resp = tokenizer.decode(out[0][inputs["input_ids"].shape[1]:-1], skip_special_tokens=False)
    stop_sequence = tokenizer.decode(out[0][-1:], skip_special_tokens=False)
if stop_sequence == "<|user|>":
print(f"Assistant Response: {generate_resp.strip()}")
break
function_calls = []
for m in generate_resp.split("<|assistant|>"):
fc_decode = is_function_call(m.strip())
if fc_decode:
message.append({"role": "assistant", "metadata": fc_decode['name'], "content": json.dumps(fc_decode['arguments'], ensure_ascii=False)})
print(f"Function Call: {fc_decode}")
function_calls.append(fc_decode)
else:
message.append({"role": "assistant", "content": m})
print(f"Assistant Response: {m.strip()}")
for fc in function_calls:
function_response = realtime_aqi(
city=fc["arguments"]["city"],
)
print(f"Function Response: {function_response}")
message.append({"role": "observation", "content": function_response})
```
## Evaluation Results
<div style="text-align: center;">
<img src="https://raw.githubusercontent.com/THUDM/GLM-4/refs/heads/main/resources/Bench-32B.png" style="width: 80%;" />
</div>
### GLM-4-0414 Series
| Model | IFEval | BFCL-v3 (Overall) | BFCL-v3 (MultiTurn) | TAU-Bench (Retail) | TAU-Bench (Airline) | SimpleQA | HotpotQA |
| ---------------- | ------ | ----------------- | ------------------- | ------------------ | ------------------- | -------- | -------- |
| Qwen2.5-Max | 85.6 | 50.9 | 30.5 | 58.3 | 22.0 | 79.0 | 52.8 |
| GPT-4o-1120 | 81.9 | 69.6 | 41.0 | 62.8 | 46.0 | 82.8 | 63.9 |
| DeepSeek-V3-0324 | 83.4 | 66.2 | 35.8 | 60.7 | 32.4 | 82.6 | 54.6 |
| DeepSeek-R1 | 84.3 | 57.5 | 12.4 | 33.0 | 37.3 | 83.9 | 63.1 |
| GLM-4-32B-0414 | 87.6 | 69.6 | 41.5 | 68.7 | 51.2 | 88.1 | 63.8 |
> For `SimpleQA` and `HotpotQA`, we sampled nearly 500 test cases from each test set, provided all models with basic `search` and `click` tools, ensured other settings remained consistent, and averaged the results over 3 runs.
| Model | Framework | [SWE-bench Verified](https://openai.com/index/introducing-swe-bench-verified/) | [SWE-bench Verified mini](https://github.com/mariushobbhahn/SWEBench-verified-mini) |
|---|---|---|---|
| GLM-4-32B-0414 | Moatless<sup>[1]</sup> | 33.8 | 38.0 |
| GLM-4-32B-0414 | Agentless<sup>[2]</sup> | 30.7 | 34.0 |
| GLM-4-32B-0414 | OpenHands<sup>[3]</sup> | 27.2 | 28.0 |
[1] [Moatless v0.0.3](https://github.com/aorwall/moatless-tools) used the following parameters: `response_format="react", thoughts_in_action=False, max_iterations=30`. No retries on failed trajectories; other settings are default.
[2] [Agentless v1.5.0](https://github.com/OpenAutoCoder/Agentless) used [BGE](https://github.com/FlagOpen/FlagEmbedding/blob/master/README.md) as the embedding model and [FAISS](https://github.com/facebookresearch/faiss) for similarity search. To speed up patch verification while maintaining performance, the timeout for running a single instance was changed from the default 300s to 180s.
[3] [OpenHands v0.29.1](https://github.com/All-Hands-AI/OpenHands/tree/main) did not use YaRN context extension but limited runs to a maximum of 60 iterations and summarized the history to prevent exceeding the 32K context limit. Summarization was configured as `llm_config="condenser", keep_first=1, max_size=32`. No retries on failed trajectories.
|
Volko76/Qwen2.5-0.5B-Instruct-4.5bpw-exl2 | Volko76 | 2025-04-27T22:16:33Z | 0 | 0 | transformers | [
"transformers",
"chat",
"autoquant",
"exl2",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-03T20:04:34Z | ---
base_model: Qwen/Qwen2.5-0.5B
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- autoquant
- exl2
---
# Qwen2.5-0.5B-Instruct
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and has greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs** especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context Support** up to 128K tokens and can generate up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the instruction-tuned 0.5B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 0.49B
- Number of Parameters (Non-Embedding): 0.36B
- Number of Layers: 24
- Number of Attention Heads (GQA): 14 for Q and 2 for KV
- Context Length: Full 32,768 tokens and generation 8192 tokens
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code of Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers` (for example via `pip install -U transformers`).
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
The following code snippet uses `apply_chat_template` to show you how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-0.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
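# Keep only the newly generated tokens by stripping the prompt from each sequence.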
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to cite us.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
``` |
MoyYuan/Varifocal-Generation-Metadata | MoyYuan | 2025-04-27T22:09:13Z | 0 | 0 | null | [
"pytorch",
"bart",
"en",
"dataset:MoyYuan/Varifocal-Generation",
"license:mit",
"region:us"
] | null | 2025-04-27T21:26:22Z | ---
license: mit
datasets:
- MoyYuan/Varifocal-Generation
language:
- en
---
Please refer to https://huggingface.co/datasets/MoyYuan/Varifocal for README information. |
MoyYuan/Varifocal-Reranking-Answer | MoyYuan | 2025-04-27T22:08:57Z | 0 | 0 | null | [
"pytorch",
"bert",
"en",
"dataset:MoyYuan/Varifocal-Reranking",
"license:mit",
"region:us"
] | null | 2025-04-27T21:50:04Z | ---
license: mit
datasets:
- MoyYuan/Varifocal-Reranking
language:
- en
---
Please refer to https://huggingface.co/datasets/MoyYuan/Varifocal for README information. |
borisloktev/qwen2_5-3B-bf16ft-base-pinpointing-lora-fix-prompt_big_full_json_output_ft_base_full_dataset | borisloktev | 2025-04-27T22:08:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:borisloktev/qwen2_5-3B-fb16-extraction-new-schema",
"base_model:finetune:borisloktev/qwen2_5-3B-fb16-extraction-new-schema",
"endpoints_compatible",
"region:us"
] | null | 2025-04-27T21:05:16Z | ---
base_model: borisloktev/qwen2_5-3B-fb16-extraction-new-schema
library_name: transformers
model_name: qwen2_5-3B-bf16ft-base-pinpointing-lora-fix-prompt_big_full_json_output_ft_base_full_dataset
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen2_5-3B-bf16ft-base-pinpointing-lora-fix-prompt_big_full_json_output_ft_base_full_dataset
This model is a fine-tuned version of [borisloktev/qwen2_5-3B-fb16-extraction-new-schema](https://huggingface.co/borisloktev/qwen2_5-3B-fb16-extraction-new-schema).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="borisloktev/qwen2_5-3B-bf16ft-base-pinpointing-lora-fix-prompt_big_full_json_output_ft_base_full_dataset", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/5-plus/qwen2_5-3B-pinpointing/runs/e3bi9nd9)
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0.dev0
- Transformers: 4.51.0.dev0
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
onnx-community/Qwen2.5-1.5B | onnx-community | 2025-04-27T21:59:06Z | 33 | 3 | transformers.js | [
"transformers.js",
"onnx",
"qwen2",
"text-generation",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:quantized:Qwen/Qwen2.5-1.5B",
"region:us"
] | text-generation | 2024-09-23T06:21:16Z | ---
base_model: Qwen/Qwen2.5-1.5B
library_name: transformers.js
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
https://huggingface.co/Qwen/Qwen2.5-1.5B with ONNX weights to be compatible with Transformers.js.
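As the note below explains, web-ready repos like this one are produced by exporting to ONNX with 🤗 Optimum; here is a minimal export sketch (the output paths are illustrative):

```python
# Minimal sketch: export the base model to ONNX with 🤗 Optimum and save it
# with the weights in an `onnx/` subfolder, matching this repo's layout.
from optimum.onnxruntime import ORTModelForCausalLM
from transformers import AutoTokenizer

model_id = "Qwen/Qwen2.5-1.5B"
ort_model = ORTModelForCausalLM.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

ort_model.save_pretrained("Qwen2.5-1.5B-ONNX/onnx")
tokenizer.save_pretrained("Qwen2.5-1.5B-ONNX")
```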
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [๐ค Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`). |
onnx-community/Qwen2.5-0.5B | onnx-community | 2025-04-27T21:58:58Z | 111 | 0 | transformers.js | [
"transformers.js",
"onnx",
"qwen2",
"text-generation",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:quantized:Qwen/Qwen2.5-0.5B",
"region:us"
] | text-generation | 2024-09-23T06:17:19Z | ---
base_model: Qwen/Qwen2.5-0.5B
library_name: transformers.js
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
https://huggingface.co/Qwen/Qwen2.5-0.5B with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [๐ค Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`). |
chchen/MentaLLaMA-chat-7B-PsyCourse-fold6 | chchen | 2025-04-27T21:58:31Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:klyang/MentaLLaMA-chat-7B-hf",
"base_model:adapter:klyang/MentaLLaMA-chat-7B-hf",
"license:mit",
"region:us"
] | null | 2025-04-27T09:11:12Z | ---
library_name: peft
license: mit
base_model: klyang/MentaLLaMA-chat-7B-hf
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: MentaLLaMA-chat-7B-PsyCourse-fold6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MentaLLaMA-chat-7B-PsyCourse-fold6
This model is a fine-tuned version of [klyang/MentaLLaMA-chat-7B-hf](https://huggingface.co/klyang/MentaLLaMA-chat-7B-hf) on the course-train-fold6 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0319
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8292 | 0.0751 | 50 | 0.6473 |
| 0.1595 | 0.1502 | 100 | 0.1169 |
| 0.0933 | 0.2254 | 150 | 0.0727 |
| 0.0512 | 0.3005 | 200 | 0.0581 |
| 0.0619 | 0.3756 | 250 | 0.0474 |
| 0.0395 | 0.4507 | 300 | 0.0460 |
| 0.0476 | 0.5258 | 350 | 0.0454 |
| 0.0444 | 0.6009 | 400 | 0.0407 |
| 0.0543 | 0.6761 | 450 | 0.0425 |
| 0.0454 | 0.7512 | 500 | 0.0372 |
| 0.0562 | 0.8263 | 550 | 0.0377 |
| 0.0336 | 0.9014 | 600 | 0.0361 |
| 0.0494 | 0.9765 | 650 | 0.0368 |
| 0.0354 | 1.0516 | 700 | 0.0386 |
| 0.029 | 1.1268 | 750 | 0.0376 |
| 0.0301 | 1.2019 | 800 | 0.0352 |
| 0.0321 | 1.2770 | 850 | 0.0341 |
| 0.0271 | 1.3521 | 900 | 0.0343 |
| 0.0351 | 1.4272 | 950 | 0.0330 |
| 0.0244 | 1.5023 | 1000 | 0.0330 |
| 0.0277 | 1.5775 | 1050 | 0.0341 |
| 0.0231 | 1.6526 | 1100 | 0.0340 |
| 0.0261 | 1.7277 | 1150 | 0.0327 |
| 0.0297 | 1.8028 | 1200 | 0.0348 |
| 0.027 | 1.8779 | 1250 | 0.0334 |
| 0.0417 | 1.9531 | 1300 | 0.0348 |
| 0.0173 | 2.0282 | 1350 | 0.0328 |
| 0.0207 | 2.1033 | 1400 | 0.0323 |
| 0.0223 | 2.1784 | 1450 | 0.0325 |
| 0.0107 | 2.2535 | 1500 | 0.0359 |
| 0.0182 | 2.3286 | 1550 | 0.0332 |
| 0.0187 | 2.4038 | 1600 | 0.0323 |
| 0.018 | 2.4789 | 1650 | 0.0327 |
| 0.0205 | 2.5540 | 1700 | 0.0350 |
| 0.0182 | 2.6291 | 1750 | 0.0323 |
| 0.0202 | 2.7042 | 1800 | 0.0325 |
| 0.0218 | 2.7793 | 1850 | 0.0323 |
| 0.0179 | 2.8545 | 1900 | 0.0319 |
| 0.0213 | 2.9296 | 1950 | 0.0330 |
| 0.0104 | 3.0047 | 2000 | 0.0328 |
| 0.0097 | 3.0798 | 2050 | 0.0359 |
| 0.0103 | 3.1549 | 2100 | 0.0363 |
| 0.0131 | 3.2300 | 2150 | 0.0359 |
| 0.0149 | 3.3052 | 2200 | 0.0362 |
| 0.0083 | 3.3803 | 2250 | 0.0365 |
| 0.0115 | 3.4554 | 2300 | 0.0359 |
| 0.0111 | 3.5305 | 2350 | 0.0387 |
| 0.0094 | 3.6056 | 2400 | 0.0376 |
| 0.0051 | 3.6808 | 2450 | 0.0376 |
| 0.0053 | 3.7559 | 2500 | 0.0375 |
| 0.0078 | 3.8310 | 2550 | 0.0377 |
| 0.0105 | 3.9061 | 2600 | 0.0372 |
| 0.0105 | 3.9812 | 2650 | 0.0371 |
| 0.0064 | 4.0563 | 2700 | 0.0382 |
| 0.0048 | 4.1315 | 2750 | 0.0398 |
| 0.0065 | 4.2066 | 2800 | 0.0407 |
| 0.0031 | 4.2817 | 2850 | 0.0417 |
| 0.0028 | 4.3568 | 2900 | 0.0420 |
| 0.0043 | 4.4319 | 2950 | 0.0421 |
| 0.0048 | 4.5070 | 3000 | 0.0424 |
| 0.0038 | 4.5822 | 3050 | 0.0428 |
| 0.0041 | 4.6573 | 3100 | 0.0430 |
| 0.0066 | 4.7324 | 3150 | 0.0431 |
| 0.003 | 4.8075 | 3200 | 0.0430 |
| 0.0031 | 4.8826 | 3250 | 0.0429 |
| 0.0046 | 4.9577 | 3300 | 0.0430 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
aleegis/b0d4847d-9557-4017-acc7-9bcf92540a90 | aleegis | 2025-04-27T21:56:33Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:lmsys/vicuna-7b-v1.5",
"base_model:adapter:lmsys/vicuna-7b-v1.5",
"license:llama2",
"region:us"
] | null | 2025-04-27T20:36:27Z | ---
library_name: peft
license: llama2
base_model: lmsys/vicuna-7b-v1.5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b0d4847d-9557-4017-acc7-9bcf92540a90
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: lmsys/vicuna-7b-v1.5
bf16: auto
chat_template: llama3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- e3bdf1006153b40c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e3bdf1006153b40c_train_data.json
type:
field_instruction: problem
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: false
group_by_length: false
hub_model_id: aleegis/b0d4847d-9557-4017-acc7-9bcf92540a90
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: null
lora_alpha: 32
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
loraplus_lr_embedding: 1.0e-06
loraplus_lr_ratio: 16
lr_scheduler: cosine
max_grad_norm: 1
max_steps: 1500
micro_batch_size: 2
mlflow_experiment_name: /tmp/e3bdf1006153b40c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 200
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
save_total_limit: 10
saves_per_epoch: 0
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.0
wandb_entity: null
wandb_mode: online
wandb_name: 3d1ee2bb-73b6-4b79-854d-d4bb37d1c5c4
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3d1ee2bb-73b6-4b79-854d-d4bb37d1c5c4
warmup_steps: 100
weight_decay: 0
xformers_attention: null
```
</details><br>
# b0d4847d-9557-4017-acc7-9bcf92540a90
This model is a fine-tuned version of [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1500
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
unsloth/QwQ-32B-GGUF | unsloth | 2025-04-27T21:54:44Z | 13,908 | 77 | null | [
"gguf",
"qwen2",
"chat",
"qwen",
"text-generation",
"en",
"arxiv:2309.00071",
"arxiv:2407.10671",
"base_model:Qwen/QwQ-32B",
"base_model:quantized:Qwen/QwQ-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text-generation | 2025-03-05T22:23:39Z | ---
base_model: Qwen/QwQ-32B
license: apache-2.0
license_link: https://huggingface.co/Qwen/QWQ-32B/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- chat
- qwen
---
> [!NOTE]
> To fix endless generations + instructions on how to run QwQ-32B, read our [Tutorial here](https://docs.unsloth.ai/basics/tutorial-how-to-run-qwq-32b-effectively).
>
<div>
<p style="margin-top: 0;">
<strong><a href="https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic v2.0</a> achieves superior accuracy & outperforms other leading quant methods.</strong>
</p>
<p style="margin-bottom: 0; margin-top: 0;">
<strong>Qwen-QwQ-32B with our bug fixes. <br> See <a href="https://huggingface.co/collections/unsloth/qwen-qwq-32b-collection-676b3b29c20c09a8c71a6235">our collection</a> for versions of QwQ-32B including GGUF & 4-bit formats.</strong>
</p>
<p style="margin-bottom: 0;">
<em>Unsloth's QwQ-32B <a href="https://unsloth.ai/blog/dynamic-4bit">Dynamic Quants</a> is selectively quantized, greatly improving accuracy over standard 4-bit.</em>
</p>
<div style="display: flex; gap: 5px; align-items: center; ">
<a href="https://github.com/unslothai/unsloth/">
<img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
</a>
<a href="https://discord.gg/unsloth">
<img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
</a>
<a href="https://docs.unsloth.ai/basics/tutorial-how-to-run-qwq-32b-effectively">
<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
</a>
</div>
<h1 style="margin-top: 0rem;">Finetune your own Reasoning model like R1 with Unsloth!</h1>
</div>
For Ollama: `ollama run hf.co/unsloth/QwQ-32B-GGUF:Q4_K_M`
For llama.cpp:
```python
import os
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
from huggingface_hub import snapshot_download
snapshot_download(
repo_id = "unsloth/QwQ-32B-GGUF",
local_dir = "unsloth-QwQ-32B-GGUF",
allow_patterns = ["*Q4_K_M*"], # For Q4_K_M
)
```
```bash
./llama.cpp/llama-cli \
--model unsloth-QwQ-32B-GGUF/QwQ-32B-Q4_K_M.gguf \
--threads 32 \
--ctx-size 16384 \
--n-gpu-layers 99 \
--seed 3407 \
--prio 2 \
--temp 0.6 \
--repeat-penalty 1.1 \
--dry-multiplier 0.5 \
--min-p 0.01 \
--top-k 40 \
--top-p 0.95 \
-no-cnv \
--samplers "top_k;top_p;min_p;temperature;dry;typ_p;xtc" \
--prompt "<|im_start|>user\nCreate a Flappy Bird game in Python."
```
See https://docs.unsloth.ai/basics/tutorial-how-to-run-qwq-32b-without-bugs for more details!
> [!NOTE]
> To stop infinite generations - add `--samplers "top_k;top_p;min_p;temperature;dry;typ_p;xtc"`
>
# โจ Finetune for Free
We have a free Google Colab notebook for turning Qwen2.5 (3B) into a reasoning model: https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_(3B)-GRPO.ipynb
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **GRPO with Phi-4** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_4_(14B)-GRPO.ipynb) | 2x faster | 80% less |
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) | 2.4x faster | 58% less |
| **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(11B)-Vision.ipynb) | 2x faster | 60% less |
| **Qwen2 VL (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2_VL_(7B)-Vision.ipynb) | 1.8x faster | 60% less |
| **Qwen2.5 (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_(7B)-Alpaca.ipynb) | 2x faster | 60% less |
| **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-Alpaca.ipynb) | 2.4x faster | 58% less |
| **Phi-4 (14B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_4-Conversational.ipynb) | 2x faster | 50% less |
| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Gemma2_(9B)-Alpaca.ipynb) | 2.4x faster | 58% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_v0.3_(7B)-Conversational.ipynb) | 2.2x faster | 62% less |
- This [Llama 3.2 conversational notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_(7B)-Text_Completion.ipynb) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
# QwQ-32B
<a href="https://chat.qwenlm.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Introduction
QwQ is the reasoning model of the Qwen series. Compared with conventional instruction-tuned models, QwQ, which is capable of thinking and reasoning, can achieve significantly enhanced performance in downstream tasks, especially hard problems. QwQ-32B is the medium-sized reasoning model, which is capable of achieving competitive performance against state-of-the-art reasoning models, e.g., DeepSeek-R1, o1-mini.
<p align="center">
<img width="100%" src="figures/benchmark.jpg">
</p>
**This repo contains the QwQ 32B model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training (Supervised Finetuning and Reinforcement Learning)
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 32.5B
- Number of Parameters (Non-Embedding): 31.0B
- Number of Layers: 64
- Number of Attention Heads (GQA): 40 for Q and 8 for KV
- Context Length: Full 131,072 tokens
**Note:** For the best experience, please review the [usage guidelines](#usage-guidelines) before deploying QwQ models.
You can try our [demo](https://huggingface.co/spaces/Qwen/QwQ-32B-Demo) or access QwQ models via [QwenChat](https://chat.qwen.ai).
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwq-32b/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
QwQ is based on Qwen2.5, whose code is included in the latest Hugging Face `transformers`. We advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
The following code snippet uses `apply_chat_template` to show you how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/QwQ-32B"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "How many r's are in the word \"strawberry\""
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
### Usage Guidelines
To achieve optimal performance, we recommend the following settings:
1. **Enforce Thoughtful Output**: Ensure the model starts with "\<think\>\n" to prevent generating empty thinking content, which can degrade output quality. If you use `apply_chat_template` and set `add_generation_prompt=True`, this is already automatically implemented, but it may cause the response to lack the \<think\> tag at the beginning. This is normal behavior.
2. **Sampling Parameters** (see the sketch at the end of this section):
- Use Temperature=0.6 and TopP=0.95 instead of Greedy decoding to avoid endless repetitions.
- Use TopK between 20 and 40 to filter out rare token occurrences while maintaining the diversity of the generated output.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
   - **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `\"answer\": \"C\"`."
4. **Handle Long Inputs**: For inputs exceeding 32,768 tokens, enable [YaRN](https://arxiv.org/abs/2309.00071) to improve the model's ability to capture long-sequence information effectively.
For supported frameworks, you could add the following to `config.json` to enable YaRN:
```json
{
...,
"rope_scaling": {
"factor": 4.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
}
}
```
For deployment, we recommend using vLLM. Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
We advise adding the `rope_scaling` configuration only when processing long contexts is required.
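The recommended sampling settings map directly onto `generate` arguments in `transformers`; here is a minimal sketch reusing `model` and `model_inputs` from the Quickstart above (the TopK value is one pick from the suggested range):

```python
# Recommended sampling setup for QwQ (sketch; values from the guidelines above).
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,
    do_sample=True,    # avoid greedy decoding and its endless repetitions
    temperature=0.6,
    top_p=0.95,
    top_k=30,          # any value in the suggested 20-40 range
)
```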
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwq-32b/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to cite us.
```
@misc{qwq32b,
title = {QwQ-32B: The Power of Scaling RL},
url = {https://qwenlm.github.io/blog/qwq-32b/},
author = {Qwen Team},
month = {March},
year = {2025}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
``` |
ZycckZ/VieModTest | ZycckZ | 2025-04-27T21:52:53Z | 13 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:KoichiYasuoka/bert-base-vietnamese-ud-goeswith",
"base_model:finetune:KoichiYasuoka/bert-base-vietnamese-ud-goeswith",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2025-04-18T15:00:35Z | ---
library_name: transformers
license: cc-by-sa-4.0
base_model: KoichiYasuoka/bert-base-vietnamese-ud-goeswith
dataset: taidng/UIT-ViQuAD2.0
tags:
- generated_from_trainer
model-index:
- name: VieModTest
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# VieModTest
This model is a fine-tuned version of the [KoichiYasuoka/bert-base-vietnamese-ud-goeswith](https://huggingface.co/KoichiYasuoka/bert-base-vietnamese-ud-goeswith) model, trained on the [taidng/UIT-ViQuAD2.0](https://huggingface.co/datasets/taidng/UIT-ViQuAD2.0) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6558
- Exact Match (EM): 50.28
- F1 Score: 70.75
## Model description
- This model is maybe not too bad, but not the best either 🙂
- Trained a total of 115,354,368 parameters
## Intended uses & limitations
- Create a simple chatbot 🤖 (see the sketch below)
- May only understand Vietnamese
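A minimal inference sketch with the 🤗 `pipeline` API (the question and context strings are illustrative):

```python
from transformers import pipeline

# Question-answering pipeline backed by this fine-tuned checkpoint.
qa = pipeline("question-answering", model="ZycckZ/VieModTest")

result = qa(
    question="Mô hình được huấn luyện trên bộ dữ liệu nào?",  # illustrative question
    context="Mô hình VieModTest được huấn luyện trên bộ dữ liệu UIT-ViQuAD2.0.",
)
print(result["answer"], result["score"])
```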
## Training and evaluation data
- taidng/UIT-ViQuAD2.0 - "train"
- taidng/UIT-ViQuAD2.0 - "validation"
## Training procedure
Based on the Hugging Face Question Answering tutorial 🤗
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8987 | 1.0 | 1423 | 1.6217 |
| 1.3346 | 2.0 | 2846 | 1.5800 |
| 1.0633 | 3.0 | 4269 | 1.6558 |
### Framework versions
- Transformers 4.51.1
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
THP2903/Qwen2-VL-2B-Instruct_impression_v2 | THP2903 | 2025-04-27T21:52:15Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:Qwen/Qwen2-VL-2B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-2B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-04-17T01:07:12Z | ---
base_model: Qwen/Qwen2-VL-2B-Instruct
library_name: transformers
model_name: Qwen2-VL-2B-Instruct_impression_v2
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen2-VL-2B-Instruct_impression_v2
This model is a fine-tuned version of [Qwen/Qwen2-VL-2B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="THP2903/Qwen2-VL-2B-Instruct_impression_v2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phucth290303-pythera/Qwen2-VL-2B-Instruct_impression/runs/cl6klszk)
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mechai-copilot/qwen2.5-3B-instruct-apply-loras | mechai-copilot | 2025-04-27T21:50:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-27T21:50:47Z | ---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** mechai-copilot
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Samiya-Hijab-Leaked-Video-18/Samiya.Hijab.Viral.Video.Link.official | Samiya-Hijab-Leaked-Video-18 | 2025-04-27T21:47:25Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-27T21:46:31Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/24tm3bsa?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
Samiya Hijab Viral Video Trending: watch, Full Story, Facts & Public Reaction
Table of content
Discover the real story behind the Samiya Hijab viral video that's trending across social media. What happened, why it's viral, and public response โ all here.
The Samiya Hijab viral video has captured widespread attention online, creating waves on platforms like TikTok, Instagram, and Twitter. In this post, we will explore what the video is about, why it became viral, and how it reflects social trends and public sentiments. This post follows Blogger, AdSense, and SEO guidelines and contains no explicit content. It's focused on information, awareness, and responsible reporting while keeping our audience updated with accurate details. |
nyuuzyou/Qwen2.5-0.5B-Bluesky | nyuuzyou | 2025-04-27T21:43:47Z | 12 | 0 | transformers | [
"transformers",
"gguf",
"bluesky",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:alpindale/two-million-bluesky-posts",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:quantized:Qwen/Qwen2.5-0.5B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-29T10:04:42Z | ---
base_model: Qwen/Qwen2.5-0.5B
tags:
- bluesky
datasets:
- alpindale/two-million-bluesky-posts
license: other
pipeline_tag: text-generation
library_name: transformers
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# Qwen2.5-0.5B-Bluesky
This model is a fine-tuned version of the [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B) language model on the [alpindale/two-million-bluesky-posts](https://huggingface.co/datasets/alpindale/two-million-bluesky-posts) dataset.
**License**: Dataset usage is subject to Bluesky's Terms of Service. |
xiaomama2002/qwen2_5_vl-7b-sft | xiaomama2002 | 2025-04-27T21:42:50Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-04-21T09:05:01Z | ---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-VL-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: sft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) on the gllava_qa_tuning dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 24
- total_eval_batch_size: 64
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.51.0
- Pytorch 2.4.0+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
mradermacher/DarkHazard-v1.1-24b-GGUF | mradermacher | 2025-04-27T21:41:39Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:FlareRebellion/DarkHazard-v1.1-24b",
"base_model:quantized:FlareRebellion/DarkHazard-v1.1-24b",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-27T14:53:27Z | ---
base_model: FlareRebellion/DarkHazard-v1.1-24b
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/FlareRebellion/DarkHazard-v1.1-24b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/DarkHazard-v1.1-24b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DarkHazard-v1.1-24b-GGUF/resolve/main/DarkHazard-v1.1-24b.Q2_K.gguf) | Q2_K | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/DarkHazard-v1.1-24b-GGUF/resolve/main/DarkHazard-v1.1-24b.Q3_K_S.gguf) | Q3_K_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/DarkHazard-v1.1-24b-GGUF/resolve/main/DarkHazard-v1.1-24b.Q3_K_M.gguf) | Q3_K_M | 11.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DarkHazard-v1.1-24b-GGUF/resolve/main/DarkHazard-v1.1-24b.Q3_K_L.gguf) | Q3_K_L | 12.5 | |
| [GGUF](https://huggingface.co/mradermacher/DarkHazard-v1.1-24b-GGUF/resolve/main/DarkHazard-v1.1-24b.IQ4_XS.gguf) | IQ4_XS | 13.0 | |
| [GGUF](https://huggingface.co/mradermacher/DarkHazard-v1.1-24b-GGUF/resolve/main/DarkHazard-v1.1-24b.Q4_K_S.gguf) | Q4_K_S | 13.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DarkHazard-v1.1-24b-GGUF/resolve/main/DarkHazard-v1.1-24b.Q4_K_M.gguf) | Q4_K_M | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DarkHazard-v1.1-24b-GGUF/resolve/main/DarkHazard-v1.1-24b.Q5_K_S.gguf) | Q5_K_S | 16.4 | |
| [GGUF](https://huggingface.co/mradermacher/DarkHazard-v1.1-24b-GGUF/resolve/main/DarkHazard-v1.1-24b.Q5_K_M.gguf) | Q5_K_M | 16.9 | |
| [GGUF](https://huggingface.co/mradermacher/DarkHazard-v1.1-24b-GGUF/resolve/main/DarkHazard-v1.1-24b.Q6_K.gguf) | Q6_K | 19.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DarkHazard-v1.1-24b-GGUF/resolve/main/DarkHazard-v1.1-24b.Q8_0.gguf) | Q8_0 | 25.2 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Samiya-Hijab-Viral-Dance-Video/18.EXCLUSIVE.Samiya.Hijab.Viral.Video.Original.Leaked.Full | Samiya-Hijab-Viral-Dance-Video | 2025-04-27T21:38:44Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-27T21:38:14Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/24tm3bsa?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
Samiya Hijab Viral Video Trending: watch, Full Story, Facts & Public Reaction
Table of content
Discover the real story behind the Samiya Hijab viral video that's trending across social media. What happened, why it's viral, and public response โ all here.
The Samiya Hijab viral video has captured widespread attention online, creating waves on platforms like TikTok, Instagram, and Twitter. In this post, we will explore what the video is about, why it became viral, and how it reflects social trends and public sentiments. This post follows Blogger, AdSense, and SEO guidelines and contains no explicit content. It's focused on information, awareness, and responsible reporting while keeping our audience updated with accurate details. |
mlfoundations-dev/c1_math_nod_4s_10k | mlfoundations-dev | 2025-04-27T21:37:52Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T21:35:11Z | ---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: c1_math_nod_4s_10k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# c1_math_nod_4s_10k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/c1_math_nod_4s_10k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
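As a convenience, here is a minimal sketch of these settings expressed as 🤗 Transformers `TrainingArguments`. It is illustrative only: the run itself used LLaMA-Factory, so the output directory and the exact argument mapping are assumptions.
```python
from transformers import TrainingArguments

# Hypothetical mapping of the hyperparameters above onto TrainingArguments;
# the original run used LLaMA-Factory, so this is a sketch, not the real config.
training_args = TrainingArguments(
    output_dir="c1_math_nod_4s_10k",    # assumed output directory
    learning_rate=4e-5,
    per_device_train_batch_size=1,      # train_batch_size
    per_device_eval_batch_size=8,       # eval_batch_size
    seed=42,
    gradient_accumulation_steps=32,     # 4 GPUs x 1 per device x 32 = 128 total batch
    optim="adamw_torch",                # betas=(0.9, 0.999), epsilon=1e-08 are the defaults
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=5.0,
)
```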
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.0a0+b465a5843b.nv24.09
- Datasets 3.5.0
- Tokenizers 0.20.3
|
ahmedch28/mistral_7b_finetuned_pr_v4 | ahmedch28 | 2025-04-27T21:34:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-27T21:34:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 Transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
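Until the card is completed, here is a hedged sketch that assumes the repo holds a causal-LM checkpoint (the repository name suggests a fine-tuned Mistral-7B); the prompt and generation settings are placeholders.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch: assumes a causal-LM checkpoint (the repo name suggests a
# fine-tuned Mistral-7B); swap the classes if the actual head type differs.
model_id = "ahmedch28/mistral_7b_finetuned_pr_v4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, how can I help?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)  # placeholder settings
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```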
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jcorblaz/ppo-Huggy | jcorblaz | 2025-04-27T21:32:54Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2025-04-27T21:32:44Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: jcorblaz/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
merterbak/Mistral-Small-3.1-24B-Instruct-2503-GGUF | merterbak | 2025-04-27T21:24:39Z | 187 | 1 | null | [
"gguf",
"mistral",
"text-generation",
"en",
"fr",
"de",
"es",
"pt",
"it",
"ja",
"ko",
"ru",
"zh",
"ar",
"fa",
"id",
"ms",
"ne",
"pl",
"ro",
"sr",
"sv",
"tr",
"uk",
"vi",
"hi",
"bn",
"base_model:mistralai/Mistral-Small-3.1-24B-Instruct-2503",
"base_model:quantized:mistralai/Mistral-Small-3.1-24B-Instruct-2503",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-26T21:12:21Z | ---
license: apache-2.0
language:
- en
- fr
- de
- es
- pt
- it
- ja
- ko
- ru
- zh
- ar
- fa
- id
- ms
- ne
- pl
- ro
- sr
- sv
- tr
- uk
- vi
- hi
- bn
base_model:
- mistralai/Mistral-Small-3.1-24B-Instruct-2503
pipeline_tag: text-generation
tags:
- mistral
---
# Mistral-Small-3.1-24B-Instruct-2503
The Mistral-Small-3.1-24B-Instruct-2503 model is available in multiple GGUF quantization formats, stored on Xet for fast and efficient access.
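As a hedged example, any of these files can be loaded with `llama-cpp-python`, which can fetch a quant directly from the Hub; the filename glob assumes the files follow the usual `<model>.<quant>.gguf` naming.
```python
from llama_cpp import Llama

# Hedged sketch using llama-cpp-python's Hub integration; the filename glob
# assumes the usual "<quant>.gguf" suffix used by GGUF repositories.
llm = Llama.from_pretrained(
    repo_id="merterbak/Mistral-Small-3.1-24B-Instruct-2503-GGUF",
    filename="*Q4_K_M.gguf",  # pick any level from the table below
    n_ctx=4096,
)
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize GGUF quantization in one line."}]
)
print(response["choices"][0]["message"]["content"])
```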
The following quantization levels are available:
| Quantization Level | Bit Precision | Quantization Type |
|--------------------|---------------|-------------------------------|
| Q2_K | 2-bit | K-means clustering |
| Q3_K_S | 3-bit | Small k-means clustering |
| Q3_K_M | 3-bit | Medium k-means clustering |
| Q3_K_L | 3-bit | Large k-means clustering |
| Q4_0 | 4-bit | Standard quantization |
| Q4_1 | 4-bit | Standard quantization |
| Q4_K_S | 4-bit | Small k-means clustering |
| Q4_K_M | 4-bit | Medium k-means clustering |
| Q5_0 | 5-bit | Standard quantization |
| Q5_1 | 5-bit | Standard quantization |
| Q5_K_S | 5-bit | Small k-means clustering |
| Q5_K_M | 5-bit | Medium k-means clustering |
| Q6_K | 6-bit | K-means clustering |
| Q8_0 | 8-bit | Standard quantization | |
Tikadilko/Kolodotvor | Tikadilko | 2025-04-27T21:22:25Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"biology",
"chemistry",
"dataset:nvidia/OpenCodeReasoning",
"base_model:deepseek-ai/DeepSeek-V3-0324",
"base_model:adapter:deepseek-ai/DeepSeek-V3-0324",
"license:bsd",
"region:us"
] | null | 2025-04-27T21:18:12Z | ---
license: bsd
datasets:
- nvidia/OpenCodeReasoning
metrics:
- bertscore
base_model:
- deepseek-ai/DeepSeek-V3-0324
new_version: deepseek-ai/DeepSeek-V3-0324
library_name: adapter-transformers
tags:
- biology
- chemistry
--- |
annaminko/myfirstmodel | annaminko | 2025-04-27T21:19:50Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-27T18:15:14Z | This is my first Hugging Face file.
---
license: mit
---
|
mansurealism/llm-course-hw1 | mansurealism | 2025-04-27T21:16:49Z | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-03-15T13:02:20Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
rdoshi21/detr-finetuned-franka | rdoshi21 | 2025-04-27T21:04:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"detr",
"object-detection",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | object-detection | 2025-04-27T21:04:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 Transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
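Pending an official snippet, here is a minimal sketch based on the repo's declared task (object detection with a DETR architecture); the image URL is only a placeholder.
```python
from transformers import pipeline

# Hedged sketch based on the repo's object-detection task and DETR tags;
# the image URL below is a placeholder, not part of the released card.
detector = pipeline("object-detection", model="rdoshi21/detr-finetuned-franka")
for det in detector("https://example.com/franka_scene.jpg"):
    print(det["label"], round(det["score"], 3), det["box"])
```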
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
3mily1u/new-codegen-350m-mono-dpoed-control-50-0.5 | 3mily1u | 2025-04-27T20:57:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"codegen",
"text-generation",
"trl",
"dpo",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T20:56:17Z | ---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 Transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
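Pending an official snippet, here is a hedged sketch based on the repo's text-generation tag and CodeGen architecture; the prompt and sampling settings are placeholders.
```python
from transformers import pipeline

# Hedged sketch: the repo is tagged text-generation with a CodeGen backbone,
# so a plain generation pipeline should apply; the prompt is a placeholder.
generator = pipeline(
    "text-generation",
    model="3mily1u/new-codegen-350m-mono-dpoed-control-50-0.5",
)
print(generator("def fibonacci(n):", max_new_tokens=64)[0]["generated_text"])
```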
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vaibhav1411/gpt2_medium_finetuned_fake_news | vaibhav1411 | 2025-04-27T20:54:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-27T12:01:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 Transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
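Until the card is completed, here is a hedged sketch that assumes a GPT-2 Medium causal-LM checkpoint (as the repository name suggests); the prompt and settings are placeholders.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch: assumes a GPT-2 Medium causal LM fine-tuned on news text,
# as the repo name suggests; adjust if the checkpoint uses a different head.
model_id = "vaibhav1411/gpt2_medium_finetuned_fake_news"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Breaking news:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)  # placeholder settings
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```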
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Vo1dAbyss/DeepSeek-R1-Distill-Qwen-7B-Luau | Vo1dAbyss | 2025-04-27T20:51:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-27T20:22:10Z | ---
base_model: unsloth/deepseek-r1-distill-qwen-7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Chat Template
```
You are a Luau programming assistant. Your primary goal is to provide clear, accurate, and logically structured responses based on the user's input or instruction. The user may ask for code implementations, explanations of Luau concepts, or help debugging issues.
Every response must be divided into two sections:
- Reasoning: Carefully analyze the user's request. Identify the intent, break down the problem into parts, and outline a logical plan to address it. This section is your internal thinking – use it to clarify your direction before writing code. Mention any assumptions you make if the user input is vague.
- Explanation: Present and explain the code you've written in detail. Describe what each part does, how it contributes to the solution, and why it was written that way. Use beginner-friendly language where possible, and include comments in code if needed.
If the user's question is ambiguous or incomplete, try to infer the most likely meaning, but mention your assumptions in the Reasoning section.
If multiple solutions exist, explain the trade-offs and why you're choosing one approach.
If the problem is simple, keep the explanation short but still clear.
If the topic is advanced, break it down clearly and explain carefully.
Always prioritize clarity over showing off complex solutions.
Important Rules:
- Always use correct Luau syntax.
- Keep explanations clear, step-by-step, and easy to understand.
- Before showing code, think carefully and check for any mistakes.
- Prefer clean and readable code.
Begin with the simplest, most straightforward solution. If that approach doesn't resolve the issue, progressively explore more advanced alternatives, while always keeping the focus on finding an effective solution.
### Instruction:
{}
### Response:
{}
```
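To make the template concrete, here is a minimal sketch that fills the two `{}` slots (instruction, empty response) and generates a completion; the abbreviated system prompt and the generation settings are assumptions, not part of the released card.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch: fills the template's two {} slots and generates.
# The abbreviated prompt and generation settings below are assumptions.
model_id = "Vo1dAbyss/DeepSeek-R1-Distill-Qwen-7B-Luau"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

template = """You are a Luau programming assistant.

### Instruction:
{}

### Response:
{}"""  # abbreviated; use the full system prompt from the card above

prompt = template.format("Write a Luau function that deep-copies a table.", "")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```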
# Uploaded model
- **Developed by:** Vo1dAbyss
- **License:** apache-2.0
- **Finetuned from model:** unsloth/deepseek-r1-distill-qwen-7b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
phospho-app/flour_picking2-e4hh41rlv2 | phospho-app | 2025-04-27T20:44:45Z | 0 | 0 | null | [
"phosphobot",
"gr00t",
"region:us"
] | null | 2025-04-27T20:43:19Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Traceback (most recent call last):
File "/root/src/helper.py", line 205, in predict
raise RuntimeError(error_msg)
RuntimeError: Training process failed with exit code 1:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/gr00t/data/dataset.py", line 644, in get_video
trajectory_index = self.get_trajectory_index(trajectory_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/gr00t/data/dataset.py", line 557, in get_trajectory_index
raise ValueError(
ValueError: Error finding trajectory index for 4, found trajectory_indices=array([4, 5])
0%| | 0/180 [00:03<?, ?it/s]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/root/src/helper.py", line 207, in predict
raise RuntimeError(e)
RuntimeError: Training process failed with exit code 1:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/gr00t/data/dataset.py", line 644, in get_video
trajectory_index = self.get_trajectory_index(trajectory_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/gr00t/data/dataset.py", line 557, in get_trajectory_index
raise ValueError(
ValueError: Error finding trajectory index for 4, found trajectory_indices=array([4, 5])
0%| | 0/180 [00:03<?, ?it/s]
```
## Training parameters:
- **Dataset**: [arturaah/flour_picking2](https://huggingface.co/datasets/arturaah/flour_picking2)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 64
- **Training steps**: 506
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=replicate_groot_training_pipeline)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=replicate_groot_training_pipeline)
|
Alessio-Borgi/all-mpnet-base-v2-margin-based-triplet-loss-finetuned-culture-10-epochs-enhanced | Alessio-Borgi | 2025-04-27T20:38:55Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mpnet",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:6551",
"loss:TripletLoss",
"arxiv:1908.10084",
"arxiv:1703.07737",
"base_model:sentence-transformers/all-mpnet-base-v2",
"base_model:finetune:sentence-transformers/all-mpnet-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-04-27T20:38:31Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:6551
- loss:TripletLoss
base_model: sentence-transformers/all-mpnet-base-v2
widget:
- source_sentence: 'Vechornytsia Slavic traditional social gathering Vechornytsi (Ukrainian:
вечорниці, from вечір "evening") are Ukrainian traditional gatherings with music,
songs, jokes and rituals. Vechornytsi traditionally began in late September, after
the seasonal agricultural work was over. Young people from villages gathered in
the evenings for entertainment. There were everyday and festive vechornytsi. During
everyday parties, people created folk art objects like rushnyky while entertaining
themselves by singing songs or telling jokes. During festive vechornytsi, rich
dinners were cooked, and there was music and dancing. It was the ladies'' responsibility
to cook dinner and the men''s responsibility to provide everybody with music,
drinks, and sweets. Vechornytsi were mainly for younger people. Here they not
only entertained each other during long winter evenings but also met new people,
communicated more closely, and found partners. Each street in a village had at
least one house for vechornytsi. Even small remote farms could have a few houses
for parties because it was a custom that brothers and sisters could not attend
the same vechornytsi. {''aliases'': [''Vechornytsi'']} {''instance of'': ''ritual'',
''subclass of'': ''ritual'', ''described by source'': ''Brockhaus and Efron Encyclopedic
Dictionary'', ''country'': ''India''}'
sentences:
- 'Rudolph Goclenius German philosopher (1547-1628) Rudolph Goclenius the Elder
(Latin: Rudolphus Goclenius; born Rudolf Gockel or Göckel; 1 March 1547 – 8 June
1628) was a German scholastic philosopher. He is sometimes credited with coining
the term psychology in 1590, though the term had been used by Pier Nicola Castellani
and Gerhard Synellius 65 years earlier. {''aliases'': [''Rudolph Goclenius the
Elder'']} {''instance of'': ''human'', ''occupation'': ''philosopher'', ''sex
or gender'': ''male'', ''languages spoken, written or signed'': ''German'', ''country
of citizenship'': ''Germany'', ''described by source'': ''Allgemeine Deutsche
Biographie''}'
- 'Fasole cu cârnaţi Romanian dish Fasole cu cârnați ("beans with sausages", Romanian
pronunciation: [faˈsole ku kɨrˈnat͡sʲ]) is a popular Romanian dish, consisting
of baked beans and sausages. A variation replaces the sausages with afumătură
(smoked meat). Also a traditional Army dish, fasole cu cârnați is prepared by
Army cooks and served freely to the crowds during the National Day celebrations
(on 1 December) in Bucharest and Alba Iulia. The main ingredients for this dish
are: beans, smoked pork, carrots, onions, tomatoes, parsnip, tomato sauce and
bay leaf. {''name'': ''Fasole cu cârnați'', ''caption'': ''Fasole cu cârnați'',
''country'': ''Romania 25px|link=Cuisine of Romania'', ''course'': ''Main course'',
''served'': ''Hot'', ''main_ingredient'': ''Sausages, beans, tomato sauce, tomatoes,
carrots, onions, parsnip, bay leaves''} {''subclass of'': ''food'', ''instance
of'': ''food'', ''has part(s)'': ''flour'', ''maintained by WikiProject'': ''WikiProject
Intangible Cultural Heritage'', ''course'': ''main course''}'
- 'bicycle pedal-driven two-wheel vehicle A bicycle, also called a pedal cycle,
bike, push-bike or cycle, is a human-powered or motor-assisted, pedal-driven,
single-track vehicle, with two wheels attached to a frame, one behind the other.
A bicycle rider is called a cyclist, or bicyclist. Bicycles were introduced in
the 19th century in Europe. By the early 21st century there were more than 1 billion
bicycles. There are many more bicycles than cars. Bicycles are the principal means
of transport in many regions. They also provide a popular form of recreation,
and have been adapted for use as children''s toys. Bicycles are used for fitness,
military and police applications, courier services, bicycle racing, and artistic
cycling. The basic shape and configuration of a typical upright or "safety" bicycle,
has changed little since the first chain-driven model was developed around 1885.
However, many details have been improved, especially since the advent of modern
materials and computer-aided design. These have allowed for a proliferation of
specialized designs for many types of cycling. In the 21st century, electric bicycles
have become popular. The bicycle''s invention has had an enormous effect on society,
both in terms of culture and of advancing modern industrial methods. Several components
that played a key role in the development of the automobile were initially invented
for use in the bicycle, including ball bearings, pneumatic tires, chain-driven
sprockets, and tension-spoked wheels. {''aliases'': [''bike'', ''pushbike'', ''pedal
bike'', ''pedal cycle'', ''cycle'', ''🚲'', ''Bicycles''], ''application'': '':Transportation'',
''caption'': ''The most popular bicycle model – and most popular vehicle of any
kind in the world – is the Chinese Flying Pigeon, with about 500 million produced.'',
''classification'': ''Vehicle'', ''components'': ''Frame, wheels, tires, saddle,
handlebar, pedals, drivetrain'', ''free_label'': ''Types'', ''free_text'': ''Utility
bicycle, mountain bicycle, racing bicycle, touring bicycle, hybrid bicycle, cruiser
bicycle, BMX bike, tandem, low rider, tall bike, fixed gear, folding bicycle,
amphibious cycle, cargo bike, recumbent, electric bicycle'', ''fuel_source'':
''Human-power (and/or motor-power)'', ''image_upright'': ''1.35'', ''invented'':
''19th century'', ''inventor'': ''Karl von Drais, Kirkpatrick MacMillan'', ''name'':
''Bicycle'', ''wheels'': ''2''} {''instance of'': ''mode of transport'', ''on
focus list of Wikimedia project'': ''Wikipedia:List of articles all languages
should have'', ''described by source'': ''Metropolitan Museum of Art Tagging Vocabulary''}'
- source_sentence: 'Slovak Figure Skating Championships recurring figure skating competition
The Slovak Figure Skating Championships (Slovak: Majstrovstvá Slovenska v krasokorčuľovaní)
are an annual figure skating competition organized by the Slovak Figure Skating
Association (Slovak: Slovenský Krasokorčuliarsky Zväz) to crown the national champions
of Slovakia. The senior-level championships are held in coordination with the
skating federations of the Czech Republic, Hungary, and Poland as part of the Four Nationals
Figure Skating Championships. The first Slovak Championships held after the dissolution
of Czechoslovakia took place in 1993. The Czech Skating Association and the Slovak
Figure Skating Association joined their national championships in 2007. Poland
joined in 2009 and Hungary in 2014. Senior-level skaters from the four nations
compete at the Four Nationals Championships, and then the results are split to
form national podiums for each nation. Medals are awarded in men''s singles, women''s
singles, pair skating, and ice dance at the senior level, and in pair skating
and ice dance at the junior level, although not every discipline is held every
year due to a lack of participants. Junior-level singles skaters and novice-level
skaters in all disciplines compete in a separate competition that is exclusive
to Slovakia. {''name'': ''Slovak Figure Skating Championships'', ''logo'': ''Slovak
Figure Skating Association Logo.jpg'', ''logo_alt'': ''Logo of the Slovak Figure
Skating Association'', ''status'': ''Active'', ''genre'': ''National championships'',
''frequency'': ''Annual'', ''country'': ''{{SVK}}'', ''first'': ''1994'', ''prev'':
''2025 Four Nationals Championships'', ''organised'': ''Slovak Figure Skating
Association''} {''instance of'': ''recurring sporting event'', ''subclass of'':
''recurring sporting event'', ''event interval'': ''{"amount": "+1", "unit": "http://www.wikidata.org/entity/Q577"}'',
''country'': ''United States'', ''sport'': ''badminton'', ''on focus list of Wikimedia
project'': ''WikiProject Badminton/tournament focus list''}'
sentences:
- 'arm folding method of crossing arms The manner in which a person folds their
arms is a dynamic morphological demonstration of two alternative phenotypes. Once
adopted, manner of arms folding across the chest does not change throughout the
lifetime and persons easily give up the unusual folding position, most commonly
at the first attempt. If the right arm is folded above the left forearm, the phenotype
is characterised as R (right; the right type), and in the opposite case, i.e.
if the left arm is positioned above the right, it is the phenotype L (left; left-type).
It has been shown that the phenotypes of these properties are distributed independently
with left-handed and right-handed people. {''aliases'': [''crossed arms'']} {''subclass
of'': ''gesture'', ''instance of'': ''gesture'', ''uses'': ''hand'', ''described
by source'': ''Brockhaus and Efron Encyclopedic Dictionary''}'
- 'Chilean takeover of the Strait of Magellan Chile''s takeover of the Strait in
1843 The Chilean colonization of the Strait of Magellan began in 1843 when an
expedition founded Fuerte Bulnes. In 1848 the settlement of Punta Arenas was established
further north in the strait and grew eventually to become the main settlement
in the strait, a position it holds to this day. The Chilean settlement of the
strait was crucial to establish its sovereignty claims in the area. Argentina
complained diplomatically about this act in 1847, as part of the East Patagonia, Tierra
del Fuego and Strait of Magellan Dispute, and once the dispute was settled, formally
recognised Chilean sovereignty of the strait in 1881. The Magallanes territory
was made a regular Chilean province in 1928. {''aliases'': [''Takeover of the
Strait of Magellan'']} {''instance of'': ''historical event'', ''country'': ''Weimar
Republic'', ''on focus list of Wikimedia project'': ''Wikipedia:Vital articles/Level/4'',
''location'': ''Berlin'', ''part of'': ''German Revolution of 1918โ1919''}'
- 'Sefer HaRazim magical book given by the Angel Raziel to Noah Sefer HaRazim (Hebrew:
ספר הרזים; "Book of Secrets") is a Jewish magical text supposedly given to Noah
by the angel Raziel, and passed down throughout Biblical history until it ended
up in the possession of Solomon, for whom it was a great source of his wisdom
and purported magical powers. This is not the same work as the Sefer Raziel HaMalakh,
which was given to Adam by the same angel, although both works stem from the same
tradition, and large parts of Sefer HaRazim were incorporated into the Sefer Raziel
under its original title. It is thought to be a sourcebook for Jewish magic, calling
upon angels rather than God to perform supernatural feats. {''aliases'': [''Sepher
Ha-Razim'', ''Book of the Mysteries'']} {''instance of'': ''book'', ''language
of work or name'': ''English'', ''subclass of'': ''book'', ''country of origin'':
''United Kingdom'', ''publisher'': ''White Wolf Publishing'', ''copyright status'':
''copyrighted'', ''author'': ''Derek Lambert'', ''described by source'': ''Meyers
Konversations-Lexikon, 4th edition (1885–1890)''}'
- source_sentence: 'Gerardus Mercator Flemish geographer, cosmographer and cartographer
(1512–1594) Gerardus Mercator (5 March 1512 – 2 December 1594) was a Flemish
geographer, cosmographer and cartographer. He is most renowned for creating the
1569 world map based on a new projection which represented sailing courses of
constant bearing (rhumb lines) as straight lines – an innovation that is still employed
in nautical charts. Mercator was a notable maker of globes and scientific instruments.
In addition, he had interests in theology, philosophy, history, mathematics, and
geomagnetism. He was also an accomplished engraver and calligrapher. Unlike other
great scholars of the age, he travelled little and his knowledge of geography
came from his library of over a thousand books and maps, from his visitors and
from his vast correspondence (in six languages) with other scholars, statesmen,
travellers, merchants and seamen. Mercator''s early maps were in large formats
suitable for wall mounting but in the second half of his life, he produced over
100 new regional maps in a smaller format suitable for binding into his Atlas
of 1595. This was the first appearance of the word Atlas in reference to a book
of maps. However, Mercator used it as a neologism for a treatise (Cosmologia)
on the creation, history and description of the universe, not simply a collection
of maps. He chose the word as a commemoration of the Titan Atlas, "King of Mauretania",
whom he considered to be the first great geographer. A large part of Mercator''s
income came from sales of terrestrial and celestial globes. For sixty years they
were considered the finest in the world, and were sold in such numbers that there
are many surviving examples. This was a substantial enterprise involving the manufacture
of the spheres, printing the gores, building substantial stands, packing and distributing
them all over Europe. He was also renowned for his scientific instruments, particularly
his astrolabes and astronomical rings used to study the geometry of astronomy
and astrology. Mercator wrote on geography, philosophy, chronology and theology.
All of the wall maps were engraved with copious text on the region concerned.
As an example, the famous world map of 1569 is inscribed with over five thousand
words in fifteen legends. The 1595 Atlas has about 120 pages of maps and illustrated
title pages, but a greater number of pages are devoted to his account of the creation
of the universe and descriptions of all the countries portrayed. His table of
chronology ran to some 400 pages fixing the dates (from the time of creation)
of earthly dynasties, major political and military events, volcanic eruptions,
earthquakes and eclipses. He also wrote on the gospels and the Old Testament.
Mercator was a devout Christian born into a Catholic family at a time when Martin
Luther''s Protestantism was gaining ground. He never declared himself as a Lutheran
but was clearly sympathetic, and he was accused of heresy by Catholic authorities;
after six months in prison he was released unscathed. This period of persecution
is probably the major factor in his move from Catholic Leuven (Louvain) to a more
tolerant Duisburg, in the Holy Roman Empire, where he lived for the last thirty
years of his life. Walter Ghim, Mercator''s friend and first biographer, describes
him as sober in his behaviour, yet cheerful and witty in company, and never more
happy than in debate with other scholars. {''name'': ''Gerardus Mercator'', ''caption'':
''Portrait by Hogenberg,1574. (Translation)'', ''alt'': ''Portrait of Gerard
Mercator'', ''birth_name'': ''Geert De Kremer'', ''birth_date'': ''5 March 1512'',
''birth_place'': ''Rupelmonde, County of Flanders'', ''death_date'': ''{{Death
date and age|df|=|yes|1594|12|2|1512|3|5}}'', ''death_place'': ''Duisburg, United
Duchies of Jülich-Cleves-Berg, {{avoid wrap|Holy Roman Empire}}'', ''education'':
''University of Leuven'', ''known_for'': ''{{Plainlist|\n* World map based on
the Mercator projection (1569)\n* Coining the term Atlas}}'', ''spouse'': ''{{plainlist|\n*
|marriage|Barbara Schellekens|1534|1586|end|=|d|\n* |marriage|Gertrude Vierlings|1589|}}
{{marriage|Barbara Schellekens|1534|1586|end|=|d}} * {{marriage|Gertrude Vierlings|1589}}'',
''children'': ''6, including Arnold and Rumold'', ''signature'': ''Signature of
Gerardus Mercator (1512–1594).png'', ''aliases'': [''Gerhard Mercator'', ''Gerhard
Kremer'', ''Mercat.e'', ''Mercatore'', ''Gerard Mercator'', ''Mercator'', ''Gherardo
Mercatore'', ''Gerard Merkator'', ''Gérard de Cremer'', ''Gerardus Cremers'']}
{''occupation'': ''writer'', ''instance of'': ''human'', ''sex or gender'': ''male'',
''position held'': ''United States senator'', ''described by source'': ''Obálky
knih'', ''copyright status as a creator'': ''copyrights on works have expired''}'
sentences:
- 'Naoko Takeuchi Japanese manga artist Naoko Takeuchi (Japanese: 武内直子, Hepburn:
Takeuchi Naoko, born March 15, 1967) is a Japanese manga artist. She is best known
as the author of Sailor Moon, one of the most popular manga series of all time.
She has won several awards, including the 1993 Kodansha Manga Award for Sailor
Moon. Takeuchi is married to Yoshihiro Togashi, the author of YuYu Hakusho and
Hunter × Hunter. {''alias'': ''Sumire Shirobara'', ''aliases'': [''Takeuchi Naoko''],
''awards'': ''Kodansha Manga Award (1993)'', ''birth_date'': ''{{Birth date and
age|1967|3|15}}'', ''birth_place'': ''Kōfu, Yamanashi, Japan'', ''caption'': ''Takeuchi
at the 1998 San Diego Comic-Con'', ''children'': ''2'', ''native_name'': ''武内直子'', ''notable works'': "{{unbulleted list|''''Sailor Moon''''|''''Codename:
Sailor V''''}}", ''occupation'': ''Manga artist'', ''spouse'': ''{{marriage|Yoshihiro
Togashi|1999}}'', ''years_active'': ''1986–present''} {''award received'': ''Inkpot
Award'', ''copyright status as a creator'': ''works protected by copyrights'',
''instance of'': ''human'', ''occupation'': ''comics artist'', ''notable work'':
''Sailor Moon''}'
- 'Prikaz military government agencies in Tsardom of Russia, 16th-17th centuries
A prikaz (Russian: прика́з; IPA: [prʲɪˈkas], plural: prikazy) was an administrative,
judicial, territorial, or executive office functioning on behalf of palace, civil,
military, or church authorities in the Grand Duchy of Moscow and the Tsardom of
Russia from the 15th to the 18th centuries. The term usually suggests the functionality
of a modern "ministry", "office", "department", or "bureau"; however, in practice
prikaz was historically applied to most governmental organizations regardless
of their function or authority. In modern Russian, prikaz literally means an ''order''
in the meaning of ''directive'' or ''command''. Most of the prikazy were subordinated
to the boyar duma. Some of them, palace prikazy (Russian: дворцовые приказы, romanized:
dvortsovyje prikazy), were subordinated to the taynyi prikaz or pervyi prikaz,
which answered directly to the tsar of Russia. The patriarch of Moscow and all
Rus'' had his own prikazy. {''note'': ''infobox not present in Wikipedia''} {''instance
of'': ''government agency'', ''subclass of'': ''government agency'', ''country'':
''United States'', ''dissolved, abolished or demolished date'': ''{"time": "+1945-00-00T00:00:00Z",
"timezone": 0, "before": 0, "after": 0, "precision": 9, "calendarmodel": "http://www.wikidata.org/entity/Q1985727"}'',
''headquarters location'': ''Washington, D.C.''}'
- 'Chicago Public Media not-for-profit media company Chicago Public Media (CPM)
is a not-for-profit radio and print media company. CPM operates as the primary
National Public Radio member organization for Chicago. It owns three non-commercial
educational FM broadcast stations and one FM translator. In addition to local
news and information productions, it produces the programs Wait Wait... Don''t
Tell Me! for NPR stations, and This American Life which is distributed by PRX
to other radio stations. On January 30, 2022, Chicago Public Media acquired the
Chicago Sun-Times daily newspaper. {''type'': ''non-profit'', ''leader_title'':
''CEO'', ''leader_name'': ''Melissa Bell'', ''subsidiaries'': "WBEZ <br> WBEW
<br> WRTE <br> ''''Chicago Sun-Times'''' <br> ''''This American Life'''' <br>
''''Wait Wait... Don''t Tell Me!''''", ''formerly'': ''The WBEZ Alliance'', ''website'':
''{{Official URL}}'', ''abbreviation'': ''CPM'', ''tax_id'': ''36-3687394'', ''aliases'':
[''Chicago Public Radio'']} {''instance of'': ''media company'', ''country'':
''United States'', ''industry'': ''mass media'', ''grants'': "bachelor''s degree",
''language of work or name'': ''English''}'
- source_sentence: 'Beau Blackstone 1973 novel Beau Blackstone is a 1973 historical
thriller novel by the British writer Derek Lambert, published under the pen name
Richard Falkirk. It is the third in a series of six novels featuring Edmund Blackstone,
a member of the Bow Street Runners in the pre-Victorian era. Blackstone goes undercover
amongst a gang of navvies working on a new railway, and is called on for plans
to thwart the first Great Train Robbery. {''name'': ''Beau Blackstone'', ''caption'':
''First edition'', ''author'': ''Derek Lambert'', ''country'': ''United Kingdom'',
''language'': ''English'', ''series'': ''Edmund Blackstone'', ''genre'': ''Historical
thriller'', ''publisher'': ''Stein and Day'', ''release_date'': ''1973'', ''media_type'':
''Print'', ''preceded_by'': "Blackstone''s Fancy", ''followed_by'': ''Blackstone
and the Scourge of Europe''} {''instance of'': ''book'', ''language of work or
name'': ''English'', ''subclass of'': ''book'', ''country of origin'': ''United
Kingdom'', ''publisher'': ''White Wolf Publishing'', ''copyright status'': ''copyrighted'',
''author'': ''Derek Lambert'', ''described by source'': ''Meyers Konversations-Lexikon,
4th edition (1885–1890)''}'
sentences:
- 'Bishop of Buddhist Churches of America The bishop is the highest spiritual leader
in the Jodo Shinshu organization Buddhist Churches of America The bishop of the
Buddhist Churches of America is the highest spiritual leader in the Buddhist Churches
of America (BCA). Since BCA is part of Honganji-ha, the bishop is subordinate
to the Monshu of Honganji-ha. Between 1899 and 1918 the leader of Buddhist Mission
of North America (BMNA) had the title kantoku (superintendent). In 1918 the title
was changed to sochō (bishop). BMNA changed its name to Buddhist Churches of America
in 1944. {''post'': ''Bishop of the Buddhist Churches of America'', ''native_name'':
''Sochō'', ''incumbent'': ''Marvin Harada'', ''incumbentsince'': ''1 April 2020'',
''style'': ''Reverend'', ''member_of'': ''Buddhist Churches of America'', ''seat'':
''San Francisco'', ''formation'': ''1918'', ''first'': ''Kōyū Uchida'', ''website'':
''https://www.buddhistchurchesofamerica.org/''} {''occupation'': ''religious leader'',
''instance of'': ''human'', ''sex or gender'': ''male'', ''subclass of'': ''religious
leader'', ''country of citizenship'': ''United States'', ''languages spoken, written
or signed'': ''English'', ''canonization status'': ''saint'', ''described by source'':
''Brockhaus and Efron Encyclopedic Dictionary''}'
- 'Cold Sweat 1970 film directed by Terence Young Cold Sweat is a 1970 French-Italian
international co-production starring Charles Bronson and directed by Terence Young.
It is based on the 1959 novel Ride the Nightmare by Richard Matheson. It was filmed
in and around Beaulieu-sur-Mer. {''name'': ''Cold Sweat'', ''caption'': ''Theatrical
release poster'', ''director'': ''Terence Young'', ''screenplay'': ''Shimon Wincelberg<br
/>Jo Eisinger<br />Dorothea Bennett'', ''based_on'': "{{based on|''''Ride the
Nightmare''''|Richard Matheson}}", ''starring'': ''Charles Bronson<br />Liv Ullmann<br
/>James Mason<br />Jill Ireland'', ''producer'': ''Robert Dorfmann<br />Maurice
Jacquin'', ''music'': ''Michel Magne'', ''cinematography'': ''Jean Rabier'', ''distributor'':
''Emerson Film Enterprises'', ''released'': ''{{Film date|1970|06|14|df|=|y}}'',
''runtime'': ''94 minutes'', ''country'': ''France<br>Italy'', ''language'': ''English''}
{''instance of'': ''film'', ''color'': ''color'', ''original language of film
or TV show'': ''English'', ''genre'': ''drama film'', ''distribution format'':
''video on demand'', ''country of origin'': ''United States''}'
- 'plank flat rectangular piece of timber A plank is timber that is flat, elongated,
and rectangular with parallel faces that are higher and longer than wide. Used
primarily in carpentry, planks are critical in the construction of ships, houses,
bridges, and many other structures. Planks also serve as supports to form shelves
and tables. Usually made from timber, sawed so that the grain runs along the length,
planks are usually more than 1+1⁄2 in (38 mm) thick, and are generally wider than
2+1⁄2 in (64 mm). In the United States, planks can be any length and are generally
a minimum of 2×8 (1+1⁄2 in × 7+1⁄4 in or 38 mm × 184 mm), but planks that are
2×10 (1+1⁄2 in × 9+1⁄4 in or 38 mm × 235 mm) and 2×12 (1+1⁄2 in × 11+1⁄4 in or
38 mm × 286 mm) are more commonly stocked by lumber retailers. Planks are often
used as a work surface on elevated scaffolding, and need to be thick enough to
provide strength without breaking when walked on. The wood is categorized as a
board if its width is less than 2+1⁄2 in (64 mm), and its thickness is less than
1+1⁄2 in (38 mm). A plank used in a building as a horizontal supporting member
that runs between foundations, walls, or beams to support a ceiling or floor is
called a joist. The plank was the basis of maritime transport: wood (except some
dense hardwoods) floats on water, and abundant forests meant wooden logs could
be easily obtained and processed, making planks the primary material in ship building.
However, since the 20th century, wood has largely been supplanted in ship construction
by iron and steel, to decrease cost and improve durability. {''note'': ''infobox
not present in Wikipedia''} {''subclass of'': ''building material'', ''instance
of'': ''building material'', ''described by source'': ''Encyclopรฆdia Britannica
11th edition'', ''on focus list of Wikimedia project'': ''Wikipedia:Vital articles/Level/4'',
''made from material'': ''concrete''}'
- source_sentence: 'Court of Appeal Icelandic appellate court The Court of Appeal
(Icelandic: Landsréttur, lit. National Court) is an appellate court in Iceland
with appellate jurisdiction over all district court cases. The court was established
by the Courts Act of 2016 and began operating 1 January 2018. The establishment
introduced a three-tier judiciary in Iceland where before operated only district
courts and the Supreme Court since the 1919 abolition of the National High Court.
The court is composed of fifteen justices selected by the Qualifications Committee
and nominated by the Minister of Justice for presidential confirmation. In cases
where the minister wishes to make changes to the committee''s selection, Parliament
must approve of said changes with a simple majority vote. {''court_name'': ''Court
of Appeal'', ''native_name'': ''Landsréttur'', ''established'': ''7 June 2016'',
''jurisdiction'': ''Iceland'', ''location'': ''Reykjavík'', ''type'': ''Presidential
appointment after Minister of Justice nomination following Qualifications Committee
selection. Parliamentary confirmation before appointment if minister nomination
differs from committee selection.'', ''authority'': ''Courts Act No. 50/2016'',
''appealsto'': ''Supreme Court'', ''appealsfrom'': ''District courts'', ''terms'':
''Life tenure'', ''positions'': ''15 (by statute)'', ''budget'': ''703.8 million
ISK (2019)'', ''website'': ''{{URL|landsrettur.is}} {{In lang|is}}'', ''chiefjudgetitle'':
''President'', ''chiefjudgename'': ''Hervör Þorvaldsdóttir'', ''chiefjudgetitle2'':
''Vice-President'', ''chiefjudgename2'': ''Eiríkur Jónsson'', ''aliases'': [''Landsréttur'']}
{''instance of'': ''government'', ''subclass of'': ''government'', ''country'':
''France'', ''applies to jurisdiction'': ''Israel''}'
sentences:
- 'inker line artist in a traditional comic book or graphic novel The inker (sometimes
credited as the finisher or embellisher) is one of the two line artists in traditional
comic book production. After the penciller creates a drawing, the inker interprets
this drawing by outlining and embellishing it with a pencil, a pen or a brush.
Inking was necessary in the traditional printing process as presses could not
reproduce pencilled drawings. Another specialist, the letterer, handles the "inking"
of text, while the colorist applies color to the final art submitted by the inker.
{''aliases'': [''finisher'', ''embellisher'', ''comic inker'', ''Inking (drawing
technique)'']} {''instance of'': ''profession'', ''subclass of'': ''comics artist''}'
- 'Zhuazhou Chinese ritual held on a child''s first birthday Zhuazhou (抓週 – literally,
"pick" and "anniversary", meaning "one-year-old catch" ) is a Chinese ritual held
at a child''s first birthday party, when the child is 1 year, i.e. typically twelve
months since birth (although variable reckonings as to what constitutes a year
of age for entitlement for zhuazhou exist), old. The parents put various objects
before the child. Parents will often put objects that symbolize career choices
or personality traits. The child''s choice is used to forecast its future. It
is said that this custom can be dated back to the Northern and Southern dynasties
(420-589). Yan Zhitui in his book Yanshi jiaxun 顏氏家訓 ("The Family Instructions
of Master Yan") documented a custom that is very similar to Zhuazhou today. The
earliest written record of this custom can be traced back to the Song dynasty
(960-1279). It is portrayed in a well-known scene in the novel Dream of the Red
Chamber. {''t'': ''{{linktext|抓週}}'', ''s'': ''{{linktext|抓周}}'', ''p'': ''zhuāzhōu'',
''w'': ''chua-cho'', ''qn'': ''thôi nôi''} {''instance of'': ''ritual'', ''subclass
of'': ''ritual'', ''described by source'': ''Brockhaus and Efron Encyclopedic
Dictionary'', ''country'': ''India''}'
- 'Allgemeine Zeitung des Judentums magazine Allgemeine Zeitung des Judentums (until
May 1903: Allgemeine Zeitung des Judenthums) was a Jewish German magazine devoted
to Jewish interests, founded in 1837 by Ludwig Philippson (1811–89), published
first in Leipzig and later in Berlin. In 1860 it had a circulation of approximately
1,500. It was read not only in Germany, Austria, and the Netherlands but also
in Eastern Europe, and continued to appear until 1922. At the time of its founding,
several Jewish journals had recently been launched in Germany – Sulamith (1806-1843),
Jedidja (1817-1831), and Abraham Geiger''s Wissenschaftliche Zeitschrift für Jüdische
Theologie (1835-1847), as well as the Unparteiische Universal-Kirchenzeitung (1837),
of Julius Vinzenz Höninghaus, which had a Jewish section edited by Michael Hess
and Isaac Markus Jost – and Philippson recognized that none had kept pace with
the needs of the times. He aimed to produce a journal for the intelligent lay
person that would both advance knowledge of Jewish history and plead the cause
of the Jews of his day. The first number of the paper appeared May 2, 1837, and
was published by Baumgärtner in Leipzig with the subtitle "Unparteiisches Organ
für Alles Jüdische Interesse in Betreff von Politik, Religion, Literatur, Geschichte,
Sprachkunde, und Belletristik" (Impartial Organ for All Matters of Jewish Interest
Pertaining to Politics, Religion, Literature, History, Philology, and Belles-lettres).
During the first two years the paper appeared three times per week. For a year
and a half a supplement was published three times a month, devoted to literature
and homiletics. In the course of 1839 it was first published twice weekly and
then eventually became a weekly. Isidore Singer, writing in 1906, highlighted
the paper''s editorial independence, noting that it had not ever received a subsidy
from any Jewish body, and that during the revolutions of 1848, "when the publication
of nearly all other Jewish journals was interrupted, the Allgemeine Zeitung braved
the storm and spoke out plainly in the political turmoil." According to I. M.
Jost, who devoted a chapter to the journal in his Neuere Geschichte der Israeliten
(1847), the Allgemeine Zeitung "became epoch-making in Jewish history by attempting
for the first time to give a general view of the life and conditions of the Jews."
Philippson''s chief aim was the civil emancipation of the Jews, carrying on the
fight for that cause in the spirit of Gabriel Riesser''s earlier periodical Der
Jude (1832-1835). The paper was a voice for moderate religious reform, focusing
attention on the organization of religious instruction, the form of worship in
the synagogue, and the cultivation of all branches of Jewish learning. It also
advocated for closer relations with non-Jews. It exercised considerable influence
on Judaism in general, and, in particular, on the evolution of Judaism in Germany.
It played a role in the establishment of a rabbinical seminary (Lehranstalt für
die Wissenschaft des Judenthums) in Berlin, and of a Jewish Publication Society
(Institut zur Förderung der Israelitischen Literatur), as well as the calling
together of a Jewish synod (Leipzig, 1869). From the outset the Allgemeine Zeitung
met with success, drawing the interest of cultured Jewish circles of Germany,
Austria, and the Netherlands. Within the first months of its publication a society
of students in Leyden (Netherlands) had formed to aid its circulation, and it
even obtained several hundred subscribers in Poland. During the first years of
its existence the paper had among its collaborators a number of the most distinguished
scholars, including Gabriel Riesser, E. Carmoly, J. L. Saalschütz, S. D. Luzzatto,
Leopold Zunz, Leopold Dukes, Julius Fürst, Leopold Löw, Franz Delitzsch, Adolph
Jellinek, Abraham Geiger, and I. M. Jost. During the first year Phoebus Philippson,
brother of Ludwig, contributed a series of 11 articles under the title "Ideas
for an Encyclopedia and a Methodology of Jewish Theology." In the mid-1850s a
supplement was published regularly, entitled Jüdisches Volksblatt zur Belehrung
und Unterhaltung auf Jüdischem Gebiete (A Popular Jewish Journal for Instruction
and Entertainment on Jewish Subjects). After Philippson''s death Gustav Karpeles
assumed the editorship, beginning with the issue of February 9, 1890. Under his
tenure the paper''s interests shifted toward the lives and situation of the Jews
of Eastern Europe. At that time a change was made in the format so that the literary
part, which formed the bulk of the paper, was separated from the part containing
the news. The latter was paged separately as a supplement entitled Der Gemeindebote,
which continued to appear until 1922. In 1890 the journal was acquired by Rudolf
Mosse, and from then on published in Berlin. Later, beginning in the second half
of 1920, the journal appeared only once every two weeks. It ceased publication
with the issue of April 28, 1922, and was succeeded by the C.V.-Zeitung (C.V.-Newspaper),
the organ of the Centralverein deutscher Staatsbürger jüdischen Glaubens (Central
Association of German Citizens of Jewish Faith). {''note'': ''infobox not present
in Wikipedia''} {''instance of'': ''magazine'', ''language of work or name'':
''German'', ''country of origin'': ''Germany'', ''copyright status'': ''public
domain'', ''described by source'': ''Brockhaus and Efron Encyclopedic Dictionary'',
''country'': ''Germany''}'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on sentence-transformers/all-mpnet-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) <!-- at revision 12e86a3c702fc3c50205a8db88f0ec7c0b6b94a0 -->
- **Maximum Sequence Length:** 384 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
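The `Pooling` module above mean-pools token embeddings and the final `Normalize` module scales each sentence vector to unit length. For intuition, here is a minimal sketch of the equivalent computation in plain `transformers`/PyTorch; it is illustrative only, and the `SentenceTransformer` API shown under Usage is the supported path:
```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

# Illustration only: running the underlying MPNet encoder by hand.
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-mpnet-base-v2")
encoder = AutoModel.from_pretrained("sentence-transformers/all-mpnet-base-v2")

batch = tokenizer(["an example sentence"], padding=True, truncation=True,
                  max_length=384, return_tensors="pt")
with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state  # (batch, seq_len, 768)

# Mean pooling: average token embeddings, masking out padding positions.
mask = batch["attention_mask"].unsqueeze(-1).float()
sentence_embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

# Normalize(): unit-length vectors, so dot product equals cosine similarity.
sentence_embedding = F.normalize(sentence_embedding, p=2, dim=1)
print(sentence_embedding.shape)  # torch.Size([1, 768])
```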
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Alessio-Borgi/all-mpnet-base-v2-margin-based-triplet-loss-finetuned-culture-10-epochs-enhanced")
# Run inference
sentences = [
"Court of Appeal Icelandic appellate court The Court of Appeal (Icelandic: Landsrรฉttur, lit. National Court) is an appellate court in Iceland with appellate jurisdiction over all district court cases. The court was established by the Courts Act of 2016 and began operating 1 January 2018. The establishment introduced a three-tier judiciary in Iceland where before operated only district courts and the Supreme Court since the 1919 abolition of the National High Court. The court is composed of fifteen justices selected by the Qualifications Committee and nominated by the Minister of Justice for presidential confirmation. In cases where the minister wishes to make changes to the committee's selection, Parliament must approve of said changes with a simple majority vote. {'court_name': 'Court of Appeal', 'native_name': 'Landsrรฉttur', 'established': '7 June 2016', 'jurisdiction': 'Iceland', 'location': 'Reykjavรญk', 'type': 'Presidential appointment after Minister of Justice nomination following Qualifications Committee selection. Parliamentary confirmation before appointment if minister nomination differs from committee selection.', 'authority': 'Courts Act No. 50/2016', 'appealsto': 'Supreme Court', 'appealsfrom': 'District courts', 'terms': 'Life tenure', 'positions': '15 (by statute)', 'budget': '703.8 million ISK (2019)', 'website': '{{URL|landsrettur.is}} {{In lang|is}}', 'chiefjudgetitle': 'President', 'chiefjudgename': 'Hervรถr รorvaldsdรณttir', 'chiefjudgetitle2': 'Vice-President', 'chiefjudgename2': 'Eirรญkur Jรณnsson', 'aliases': ['Landsrรฉttur']} {'instance of': 'government', 'subclass of': 'government', 'country': 'France', 'applies to jurisdiction': 'Israel'}",
    'Allgemeine Zeitung des Judentums magazine Allgemeine Zeitung des Judentums (until May 1903: Allgemeine Zeitung des Judenthums) was a Jewish German magazine devoted to Jewish interests, founded in 1837 by Ludwig Philippson (1811–89), published first in Leipzig and later in Berlin. In 1860 it had a circulation of approximately 1,500. It was read not only in Germany, Austria, and the Netherlands but also in Eastern Europe, and continued to appear until 1922. At the time of its founding, several Jewish journals had recently been launched in Germany – Sulamith (1806-1843), Jedidja (1817-1831), and Abraham Geiger\'s Wissenschaftliche Zeitschrift für Jüdische Theologie (1835-1847), as well as the Unparteiische Universal-Kirchenzeitung (1837), of Julius Vinzenz Höninghaus, which had a Jewish section edited by Michael Hess and Isaac Markus Jost – and Philippson recognized that none had kept pace with the needs of the times. He aimed to produce a journal for the intelligent lay person that would both advance knowledge of Jewish history and plead the cause of the Jews of his day. The first number of the paper appeared May 2, 1837, and was published by Baumgärtner in Leipzig with the subtitle "Unparteiisches Organ für Alles Jüdische Interesse in Betreff von Politik, Religion, Literatur, Geschichte, Sprachkunde, und Belletristik" (Impartial Organ for All Matters of Jewish Interest Pertaining to Politics, Religion, Literature, History, Philology, and Belles-lettres). During the first two years the paper appeared three times per week. For a year and a half a supplement was published three times a month, devoted to literature and homiletics. In the course of 1839 it was first published twice weekly and then eventually became a weekly. Isidore Singer, writing in 1906, highlighted the paper\'s editorial independence, noting that it had not ever received a subsidy from any Jewish body, and that during the revolutions of 1848, "when the publication of nearly all other Jewish journals was interrupted, the Allgemeine Zeitung braved the storm and spoke out plainly in the political turmoil." According to I. M. Jost, who devoted a chapter to the journal in his Neuere Geschichte der Israeliten (1847), the Allgemeine Zeitung "became epoch-making in Jewish history by attempting for the first time to give a general view of the life and conditions of the Jews." Philippson\'s chief aim was the civil emancipation of the Jews, carrying on the fight for that cause in the spirit of Gabriel Riesser\'s earlier periodical Der Jude (1832-1835). The paper was a voice for moderate religious reform, focusing attention on the organization of religious instruction, the form of worship in the synagogue, and the cultivation of all branches of Jewish learning. It also advocated for closer relations with non-Jews. It exercised considerable influence on Judaism in general, and, in particular, on the evolution of Judaism in Germany. It played a role in the establishment of a rabbinical seminary (Lehranstalt für die Wissenschaft des Judenthums) in Berlin, and of a Jewish Publication Society (Institut zur Förderung der Israelitischen Literatur), as well as the calling together of a Jewish synod (Leipzig, 1869). From the outset the Allgemeine Zeitung met with success, drawing the interest of cultured Jewish circles of Germany, Austria, and the Netherlands. Within the first months of its publication a society of students in Leyden (Netherlands) had formed to aid its circulation, and it even obtained several hundred subscribers in Poland. During the first years of its existence the paper had among its collaborators a number of the most distinguished scholars, including Gabriel Riesser, E. Carmoly, J. L. Saalschütz, S. D. Luzzatto, Leopold Zunz, Leopold Dukes, Julius Fürst, Leopold Löw, Franz Delitzsch, Adolph Jellinek, Abraham Geiger, and I. M. Jost. During the first year Phoebus Philippson, brother of Ludwig, contributed a series of 11 articles under the title "Ideas for an Encyclopedia and a Methodology of Jewish Theology." In the mid-1850s a supplement was published regularly, entitled Jüdisches Volksblatt zur Belehrung und Unterhaltung auf Jüdischem Gebiete (A Popular Jewish Journal for Instruction and Entertainment on Jewish Subjects). After Philippson\'s death Gustav Karpeles assumed the editorship, beginning with the issue of February 9, 1890. Under his tenure the paper\'s interests shifted toward the lives and situation of the Jews of Eastern Europe. At that time a change was made in the format so that the literary part, which formed the bulk of the paper, was separated from the part containing the news. The latter was paged separately as a supplement entitled Der Gemeindebote, which continued to appear until 1922. In 1890 the journal was acquired by Rudolf Mosse, and from then on published in Berlin. Later, beginning in the second half of 1920, the journal appeared only once every two weeks. It ceased publication with the issue of April 28, 1922, and was succeeded by the C.V.-Zeitung (C.V.-Newspaper), the organ of the Centralverein deutscher Staatsbürger jüdischen Glaubens (Central Association of German Citizens of Jewish Faith). {\'note\': \'infobox not present in Wikipedia\'} {\'instance of\': \'magazine\', \'language of work or name\': \'German\', \'country of origin\': \'Germany\', \'copyright status\': \'public domain\', \'described by source\': \'Brockhaus and Efron Encyclopedic Dictionary\', \'country\': \'Germany\'}',
'inker line artist in a traditional comic book or graphic novel The inker (sometimes credited as the finisher or embellisher) is one of the two line artists in traditional comic book production. After the penciller creates a drawing, the inker interprets this drawing by outlining and embellishing it with a pencil, a pen or a brush. Inking was necessary in the traditional printing process as presses could not reproduce pencilled drawings. Another specialist, the letterer, handles the "inking" of text, while the colorist applies color to the final art submitted by the inker. {\'aliases\': [\'finisher\', \'embellisher\', \'comic inker\', \'Inking (drawing technique)\']} {\'instance of\': \'profession\', \'subclass of\': \'comics artist\'}',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
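Since the embeddings are unit-normalized, the model also works directly for semantic search over a small corpus. A minimal sketch (the query and corpus strings below are made-up examples, not part of the training data):
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Alessio-Borgi/all-mpnet-base-v2-margin-based-triplet-loss-finetuned-culture-10-epochs-enhanced")

corpus = [
    "The Court of Appeal is an appellate court in Iceland.",
    "A plank is a flat, elongated, rectangular piece of timber.",
    "The inker is one of the two line artists in comic book production.",
]
query = "Which Icelandic court hears appeals from the district courts?"

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode([query])

# Cosine similarity between the query and every corpus entry.
scores = model.similarity(query_embedding, corpus_embeddings)  # shape [1, 3]
best = scores.argmax().item()
print(corpus[best], float(scores[0, best]))
```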
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 6,551 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>sentence_2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | sentence_2 |
|:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 72 tokens</li><li>mean: 303.52 tokens</li><li>max: 384 tokens</li></ul> | <ul><li>min: 58 tokens</li><li>mean: 296.5 tokens</li><li>max: 384 tokens</li></ul> | <ul><li>min: 58 tokens</li><li>mean: 295.81 tokens</li><li>max: 384 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 | sentence_2 |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Cabinet of the French Consulate Wikimedia list article The Cabinet of the French Consulate was formed following the Coup of 18 Brumaire which replaced the Directory with the Consulate. The new regime was ratified by the adoption of the Constitution of the Year VIII on 24 December 1799 and headed by Napoleon Bonaparte as First Consul, with Jean Jacques Régis de Cambacérès and Charles-François Lebrun serving as Second and Third Consuls respectively. {'cabinet_name': 'Cabinet of the Consulate', 'jurisdiction': 'France', 'flag': '{{flag|France}}', 'flag_border': 'true', 'caption': 'The Three Consuls.', 'date_formed': '11 November 1799', 'date_dissolved': '18 May 1804', 'government_head': 'Napoleon Bonaparte<br>Jean Jacques Régis de Cambacérès<br>Charles-François Lebrun', 'previous': 'Government of the Directory', 'successor': 'First Cabinet of Napoleon I'} {'instance of': 'government', 'subclass of': 'government', 'country': 'France', 'applies to jurisdiction': 'Israel'}</code> | <code>Julia Carabias Mexican professor Julia Carabias Lillo (born August 11, 1954, Mexico City) is a Mexican ecologist and Environmental Conservationist. She is a professor at the National Autonomous University of Mexico and served as the Secretariat of Environment and Natural Resources under President Ernesto Zedillo from 1994 to 2000. {'name': 'Julia Carabias Lillo', 'birth_date': 'August 11, 1954', 'birth_place': 'Mexico City, Mexico', 'fields': 'Ecology and Environmental Conservation', 'workplaces': 'National Autonomous University of Mexico\nSecretariat of Environment and Natural Resources', 'alma_mater': 'National Autonomous University of Mexico, BS (1977) and MS (1981)', 'spouse': 'José Woldenberg (divorced)', 'awards': 'J. Paul Getty Award for Conservation Leadership\nInternational Cosmos Prize\nChampions of the Earth\nBelisario Domínguez Medal of Honor', 'known_for': 'Environmental conservation, former Secretariat of Environment and Natural Resources', 'aliases': ['Julia Carabias Lil...</code> | <code>youth sports sport practiced by youth Youth sports is any sports event where competitors are younger than adult age, whether children or adolescents. Youth sports includes school sports at primary and secondary level, as well as sports played outside the education system, whether informally or organized. In sports studies and public policy contexts, an age limit of 18 (the age of majority) is usual in discussing "youth sport". Not all sports governing bodies define "youth" as "under-18": while the Youth Olympic Games and the FA Youth Cup are for under-18s, the LEN Junior Water Polo European Championship is for under-17s. Many youth sport programmes have multiple age levels, for example under-8, under-10, under-12, etc. It is not, however, only underage sport that may be considered as "youth" sport; for example, the existence of the World Rowing U23 Championships recognises that adults aged 18–22 have not yet reached peak condition. Moreover, many definitions consider postsecondary/coll...</code> |
| <code>Catharism Christian dualist movement that thrived in some areas of Southern Europe Catharism ( KATH-ər-iz-əm; from the Ancient Greek: καθαροί, romanized: katharoí, "the pure ones") was a Christian quasi-dualist and pseudo-Gnostic movement which thrived in Southern Europe, particularly in northern Italy and southern France, between the 12th and 14th centuries. Denounced as a heretical sect by the Catholic Church, its followers were attacked first by the Albigensian Crusade and later by the Medieval Inquisition, which eradicated the sect by 1350. Around 1 million were slaughtered, hanged, or burnt at the stake. Followers were known as Cathars or Albigensians, after the French city Albi where the movement first took hold, but referred to themselves as Good Christians. They famously believed that there were not one, but two Gods—the good God of Heaven and the evil god of this age (2 Corinthians 4:4). According to tradition, Cathars believed that the good God was the God of the New Testamen...</code> | <code>Mosan art regional style of art from the Meuse river valley Mosan art is a regional style of art from the valley of the Meuse in present-day Belgium, the Netherlands, and Germany. Although in a broader sense the term applies to art from this region from all periods, it generally refers to Romanesque art, with Mosan Romanesque architecture, stone carving, metalwork, enamelling and manuscript illumination reaching a high level of development during the 11th, 12th and 13th centuries. The Meuse river valley lay in the heart of the earlier Carolingian Empire and therefore the style draws largely from the heritage of the Carolingian art tradition. Thus, Mosan art contains strong classical elements, which separates it from the international Romanesque style seen elsewhere during the period, for example in France, Germany Spain and Italy. However, it shares with mainstream Romanesque art elements such as the treatment of space. Although the iconography of 11th- and 12th-century Meuse valley ar...</code> | <code>Arrabal an area on the periphery of a city or large town An Arrabal is a Spanish word for an area on the periphery of a city or large town, a suburb. It may also refer to: Bruno Arrabal (born 1992), Brazilian footballer Fernando Arrabal (born 1932), Spanish author and filmmaker Progreso Alfarache Arrabal (1888–1964), Andalusian anarcho-syndicalist Arrabal (Zaragoza), a district in Zaragoza, Spain Arrabal (Leiria), a parish (freguesia) in Leiria, Portugal {'aliases': ['suburb']} {'subclass of': 'neighborhood', 'part of': 'city', 'on focus list of Wikimedia project': 'Wikipedia:Vital articles/Level/4', 'Commons category': 'Downtowns and city centers', 'said to be the same as': 'central business district'}</code> |
| <code>Varpa Swedish outdoor sport Varpa is an outdoor game of physical skill that allegedly dates back to the Viking Age and survived in Gotland. It is similar to boules and horseshoes but is played with a flat and heavy object called a "varpa" instead of balls. Varpas used to be well-shaped stones, but nowadays, aluminium is more popular. A varpa can weigh between one-half and five kilograms (one and eleven pounds). The object of the game is to throw the varpa as close to a stick as possible. The stick is fifteen metres (sixteen yards) away for women and twenty metres (twenty-two yards) away for men. The game can be played individually or in teams. No official nationally sponsored varpa teams exist; however, unofficial leagues are growing in popularity among youth in suburban areas of Sweden and Norway. "Varpa" is an old word which simply means "to throw". Varpa is one of the disciplines at the annual Stånga Games (Stångaspelen). {'note': 'infobox not present in Wikipedia'} {'instance of': ...</code> | <code>Pescara city in Abruzzo, Central Italy Pescara (Italian: [pesˈkaːra] ; Abruzzese: Pescàrë; Pescarese: Piscàrë) is the capital city of the province of Pescara, in the Abruzzo region of Italy. It is the most populated city in Abruzzo, with 118,657 (January 1, 2023) residents (and approximately 350,000 including the surrounding metropolitan area). Located on the Adriatic coast at the mouth of the River Aterno-Pescara, the present-day municipality was formed in 1927 joining the municipalities of the old Pescara fortress, the part of the city to the south of the river, and Castellamare Adriatico, the part of the city to the north of the river. The surrounding area was formed into the province of Pescara. The main commercial street of the city is Corso Umberto I, which runs between two squares, starting from Piazza della Repubblica and reaching the seacoast in Piazza Primo Maggio. The rectangle that it forms with Corso Vittorio Emanuele II and Via Nicola Fabrizi is home of the main shopping ...</code> | <code>religious epistemology approach to epistemological questions from a religious perspective Religious epistemology broadly covers religious approaches to epistemological questions, or attempts to understand the epistemological issues that come from religious belief. The questions asked by epistemologists apply to religious beliefs and propositions whether they seem rational, justified, warranted, reasonable, based on evidence and so on. Religious views also influence epistemological theories, such as in the case of Reformed epistemology. Reformed epistemology has mainly developed in contemporary Christian religious epistemology, as in the work of Alvin Plantinga (born 1932), William P. Alston (1921-2009), Nicholas Wolterstorff (born 1932) and Kelly James Clark, as a critique of and alternative to the idea of "evidentialism" of the sort proposed by W. K. Clifford (1845-1879). Alvin Plantinga, for instance, is critical of the evidentialist analysis of knowledge provided by Richard Feldman ...</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.EUCLIDEAN",
"triplet_margin": 0.5
}
```
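For reference, a setup like this can be reproduced with the Sentence Transformers v3 trainer roughly as follows. This is a hedged sketch, not the original training script: the placeholder triplet stands in for the 6,551 (anchor, positive, negative) rows described above.
```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# Placeholder data: the three columns map positionally to (anchor, positive, negative).
train_dataset = Dataset.from_dict({
    "sentence_0": ["anchor text"],
    "sentence_1": ["positive text"],
    "sentence_2": ["negative text"],
})

# Euclidean triplet loss with margin 0.5, matching the parameters listed above.
loss = losses.TripletLoss(
    model,
    distance_metric=losses.TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=0.5,
)

args = SentenceTransformerTrainingArguments(
    output_dir="outputs",
    num_train_epochs=10,
    per_device_train_batch_size=8,
    fp16=True,
)

trainer = SentenceTransformerTrainer(model=model, args=args,
                                     train_dataset=train_dataset, loss=loss)
trainer.train()
```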
### Training Hyperparameters
#### Non-Default Hyperparameters
- `num_train_epochs`: 10
- `fp16`: True
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.6105 | 500 | 0.2552 |
| 1.2210 | 1000 | 0.1448 |
| 1.8315 | 1500 | 0.0974 |
| 2.4420 | 2000 | 0.0565 |
| 3.0525 | 2500 | 0.0499 |
| 3.6630 | 3000 | 0.0298 |
| 4.2735 | 3500 | 0.0212 |
| 4.8840 | 4000 | 0.0163 |
| 5.4945 | 4500 | 0.0121 |
| 6.1050 | 5000 | 0.01 |
| 6.7155 | 5500 | 0.0062 |
| 7.3260 | 6000 | 0.0063 |
| 7.9365 | 6500 | 0.0046 |
| 8.5470 | 7000 | 0.0021 |
| 9.1575 | 7500 | 0.0021 |
| 9.7680 | 8000 | 0.0017 |
### Framework Versions
- Python: 3.11.12
- Sentence Transformers: 3.4.1
- Transformers: 4.51.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.2
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### TripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
secmlr/DS-Noisy_DS-Clean_DS-OSS_QWQ-OSS_QWQ-Clean_QWQ-Noisy_Con_Qwen2.5-7B-Instruct_sft | secmlr | 2025-04-27T20:34:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T06:19:48Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: DS-Noisy_DS-Clean_DS-OSS_QWQ-OSS_QWQ-Clean_QWQ-Noisy_Con_Qwen2.5-7B-Instruct_sft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DS-Noisy_DS-Clean_DS-OSS_QWQ-OSS_QWQ-Clean_QWQ-Noisy_Con_Qwen2.5-7B-Instruct_sft
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the DS-Noisy, the DS-Clean, the DS-OSS, the QWQ-OSS, the QWQ-Clean, the QWQ-Noisy and the Con datasets.
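The card ships without a usage example; since the base model is Qwen2.5-7B-Instruct, a standard `transformers` chat-style invocation along the following lines should apply (a sketch under that assumption, not code from the authors):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "secmlr/DS-Noisy_DS-Clean_DS-OSS_QWQ-OSS_QWQ-Clean_QWQ-Noisy_Con_Qwen2.5-7B-Instruct_sft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Explain what a buffer overflow is."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```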
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 12
- total_train_batch_size: 24
- total_eval_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.21.0
|
phospho-app/flour_picking2-nuqcy4eay0 | phospho-app | 2025-04-27T20:33:52Z | 0 | 0 | null | [
"phosphobot",
"gr00t",
"region:us"
] | null | 2025-04-27T20:33:10Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Traceback (most recent call last):
File "/root/src/helper.py", line 205, in predict
raise RuntimeError(error_msg)
RuntimeError: Training process failed with exit code 1:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/gr00t/data/dataset.py", line 644, in get_video
trajectory_index = self.get_trajectory_index(trajectory_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/gr00t/data/dataset.py", line 557, in get_trajectory_index
raise ValueError(
ValueError: Error finding trajectory index for 4, found trajectory_indices=array([4, 5])
0%| | 0/180 [00:01<?, ?it/s]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/root/src/helper.py", line 207, in predict
raise RuntimeError(e)
RuntimeError: Training process failed with exit code 1:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/gr00t/data/dataset.py", line 644, in get_video
trajectory_index = self.get_trajectory_index(trajectory_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/gr00t/data/dataset.py", line 557, in get_trajectory_index
raise ValueError(
ValueError: Error finding trajectory index for 4, found trajectory_indices=array([4, 5])
0%| | 0/180 [00:01<?, ?it/s]
```
## Training parameters:
- **Dataset**: [arturaah/flour_picking2](https://huggingface.co/datasets/arturaah/flour_picking2)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 64
- **Training steps**: 506
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=replicate_groot_training_pipeline)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=replicate_groot_training_pipeline)
|
mlx-community/CodeLlama-7b-Instruct-hf-6bit-mlx | mlx-community | 2025-04-27T20:31:35Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"llama",
"llama-2",
"text-generation",
"conversational",
"code",
"base_model:codellama/CodeLlama-7b-Instruct-hf",
"base_model:quantized:codellama/CodeLlama-7b-Instruct-hf",
"license:llama2",
"6-bit",
"region:us"
] | text-generation | 2025-04-27T20:25:54Z | ---
language:
- code
pipeline_tag: text-generation
tags:
- llama-2
- mlx
license: llama2
base_model: codellama/CodeLlama-7b-Instruct-hf
library_name: mlx
---
# mlx-community/CodeLlama-7b-Instruct-hf-6bit-mlx
This model [mlx-community/CodeLlama-7b-Instruct-hf-6bit-mlx](https://huggingface.co/mlx-community/CodeLlama-7b-Instruct-hf-6bit-mlx) was
converted to MLX format from [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf)
using mlx-lm version **0.23.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/CodeLlama-7b-Instruct-hf-6bit-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
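The same checkpoint can also be driven from the command line via the `mlx_lm.generate` entry point installed with mlx-lm (the prompt text is illustrative):
```bash
mlx_lm.generate --model mlx-community/CodeLlama-7b-Instruct-hf-6bit-mlx \
  --prompt "Write a Python function that reverses a string"
```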
|
stream-21-aina-asif-Go-Viral-Link/Original.Viral.Clip.aina.asif.Viral.Video.Leaks.official | stream-21-aina-asif-Go-Viral-Link | 2025-04-27T20:31:26Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-27T20:29:33Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/mw5wvsaa?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
Veera Dheera Sooran (Kannada) OTT Release: Here's When & Where To Watch Vikram's Film Online, DEETS
Veera Dheera Sooran will also be available to stream on Kannada, Tamil, Telugu, Malayalam, and Hindi
My Hero Academia: Vigilantes Episode 3 Release Date & Time: Here's When To Watch New Episode, Storyline & More
In India, one can stream My Hero Academia: Vigilantes Episode 3 on Crunchyroll.
Crushology 101 Episode 4 OTT Release Time: Here's When & Where New Episode Will Premiere Online In India
Crushology 101 Episode 4 India Release Time: Featuring Roh Jeong-eui, Lee Chae-min, Kim Hyun-jin, Kim |
aleegis/330c9a7b-3ce6-430e-b40e-9839df5802c7 | aleegis | 2025-04-27T20:30:57Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:Intel/neural-chat-7b-v3-3",
"base_model:adapter:Intel/neural-chat-7b-v3-3",
"license:apache-2.0",
"region:us"
] | null | 2025-04-27T19:14:23Z | ---
library_name: peft
license: apache-2.0
base_model: Intel/neural-chat-7b-v3-3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 330c9a7b-3ce6-430e-b40e-9839df5802c7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Intel/neural-chat-7b-v3-3
bf16: auto
chat_template: llama3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- a0c2b0c2a290b3a0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a0c2b0c2a290b3a0_train_data.json
type:
field_input: Context
field_instruction: Question
field_output: Answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: false
group_by_length: false
hub_model_id: aleegis/330c9a7b-3ce6-430e-b40e-9839df5802c7
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: null
lora_alpha: 32
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
loraplus_lr_embedding: 1.0e-06
loraplus_lr_ratio: 16
lr_scheduler: cosine
max_grad_norm: 1
max_steps: 1500
micro_batch_size: 2
mlflow_experiment_name: /tmp/a0c2b0c2a290b3a0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 200
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
save_total_limit: 10
saves_per_epoch: 0
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.0
wandb_entity: null
wandb_mode: online
wandb_name: 3002708a-6c81-44ee-8407-ea206b5cbbd2
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3002708a-6c81-44ee-8407-ea206b5cbbd2
warmup_steps: 100
weight_decay: 0
xformers_attention: null
```
</details><br>
# 330c9a7b-3ce6-430e-b40e-9839df5802c7
This model is a fine-tuned version of [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3) on the None dataset.
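Because this repository holds a LoRA adapter (`library_name: peft`), it is loaded on top of the base checkpoint rather than on its own. A minimal sketch with the PEFT API:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this repository's LoRA adapter.
base = AutoModelForCausalLM.from_pretrained("Intel/neural-chat-7b-v3-3", device_map="auto")
model = PeftModel.from_pretrained(base, "aleegis/330c9a7b-3ce6-430e-b40e-9839df5802c7")
tokenizer = AutoTokenizer.from_pretrained("Intel/neural-chat-7b-v3-3")

inputs = tokenizer("Question: What is a LoRA adapter?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```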
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: AdamW (torch, fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1500
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
LuckyLukke/grpo_turn_level_onesided_1_starter_change-80 | LuckyLukke | 2025-04-27T20:24:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T20:21:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
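The card itself gives no snippet; based on the repository's `llama`/`text-generation` tags, a generic causal-LM invocation such as the following is a reasonable assumption:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LuckyLukke/grpo_turn_level_onesided_1_starter_change-80"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello! How are you today?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```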
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LuckyLukke/grpo_turn_level_onesided_1_starter_change-240 | LuckyLukke | 2025-04-27T20:24:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T20:21:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
reshinthadith/local-repo-coder-v0 | reshinthadith | 2025-04-27T20:22:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"base_model:Qwen/Qwen2.5-Coder-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-3B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-25T20:19:02Z | ---
library_name: transformers
license: apache-2.0
base_model:
- Qwen/Qwen2.5-Coder-3B-Instruct
---
# Model Card for Model ID
Generates and edits minimal multi-file Python code. Right now it consistently generates up to 2-3 files, together with a `runner.sh` bash script that orchestrates them, and maintains PEP-8 style.
## Model Details
### Model Description
- **Developed by:** Reshinth Adithyan
- **License:** Apache 2.0
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/reshinthadithyan/repo-level-code/tree/main
### Generated Format
The model generates the repository in the following format. Code to parse it and materialize the repository is given below.
```txt
<libs>pytorch,wandb</libs>
<planning>PLANNING AS MARKDOWN FORMAT</planning>
<requirements>CONTENT FOR THE REQS FILE HERE</requirements>
<output><file1>src/dataset.py<content>YOUR PYTHON CODE HERE</content></file1>
<file2>src/model.py<content>YOUR PYTHON CODE HERE</content></file2>
<bashfile>run.sh<content>python3 src/model.py</content></bashfile></output>
```
## Example
An example of generated code is given [here](https://huggingface.co/reshinthadith/local-repo-coder-v0/blob/main/example.txt). Using the script below, it is processed into:
```text
Repository generated at: ./output_dir/demo2
demo2/
run.sh
src/
visualize_timeseries.py
```
### Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
import fire
from pathlib import Path
import os
import re
def generate_repo_from_string(input_str: str, output_dir: str) -> None:
"""
Parse <output> tags in the input string and write files (and bashfiles) to the specified output directory.
- Searches for <output>...</output> section.
- Within that, finds all <fileX> or <bashfile> tags:
<file1>path/to/file.ext<content>...file content...</content></file1>
<bashfile>script.sh<content>...script content...</content></bashfile>
Args:
input_str: The full string containing <output> markup.
output_dir: Directory where files will be created. Existing files will be overwritten.
"""
# Extract the content inside <output>...</output>
out_match = re.search(r"<output>(.*?)</output>", input_str, re.DOTALL)
if not out_match:
raise ValueError("No <output> section found in input.")
output_section = out_match.group(1)
# Regex to find file tags: file1, file2, file3, ... and bashfile
pattern = re.compile(
r"<(file\d+|bashfile)>([^<]+?)<content>(.*?)</content></\1>",
re.DOTALL
)
for tag, filename, content in pattern.findall(output_section):
# Determine full path
file_path = os.path.join(output_dir, filename.strip())
# Ensure parent directory exists
parent = os.path.dirname(file_path)
if parent:
os.makedirs(parent, exist_ok=True)
# Write content to file
with open(file_path, 'w', encoding='utf-8') as f:
# Strip only one leading newline if present
f.write(content.lstrip('\n'))
print(f"Repository generated at: {output_dir}")
def main(model_path: str = "./models_dir/repo_coder_v1",
         prompt: str = "Generate a small python repo for matplotlib to visualize timeseries data to read from timeseries.csv file using polars.",
         output_path: str = "./output_dir/demo2"):
input_prompt = "###Instruction: {prompt}".format(prompt=prompt)
def load_model(model_path):
"""
Load the model and tokenizer from the specified path.
"""
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype="auto").to("cuda:0")
model.eval()
return model, tokenizer
model, tokenizer = load_model(model_path)
print(f"Loaded model from {model_path}.")
    inputs = tokenizer(input_prompt, return_tensors="pt").to(model.device)  # "inputs" avoids shadowing the builtin input()
    with torch.no_grad():
        output = model.generate(**inputs, max_length=1024, do_sample=True, temperature=0.9, top_p=0.95, top_k=50)
generated_code_repo = tokenizer.decode(output[0], skip_special_tokens=True)
print(f"Generated code repo: {generated_code_repo}")
Path(output_path).mkdir(parents=True, exist_ok=True)
generate_repo_from_string(generated_code_repo, output_path)
def list_files(startpath):
for root, dirs, files in os.walk(startpath):
level = root.replace(startpath, '').count(os.sep)
indent = ' ' * 4 * (level)
print('{}{}/'.format(indent, os.path.basename(root)))
subindent = ' ' * 4 * (level + 1)
for f in files:
print('{}{}'.format(subindent, f))
list_files(output_path)
if __name__ == "__main__":
fire.Fire(main)
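# Example CLI invocation (hypothetical file name and paths; fire exposes
# main()'s keyword arguments as command-line flags):
#   python generate_repo.py --model_path ./models_dir/repo_coder_v1 \
#     --prompt "Generate a small python repo ..." --output_path ./output_dir/demo2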
``` |
mlx-community/CodeLlama-7b-Instruct-hf-8bit-mlx | mlx-community | 2025-04-27T20:21:43Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"llama",
"llama-2",
"text-generation",
"conversational",
"code",
"base_model:codellama/CodeLlama-7b-Instruct-hf",
"base_model:quantized:codellama/CodeLlama-7b-Instruct-hf",
"license:llama2",
"8-bit",
"region:us"
] | text-generation | 2025-04-27T20:15:00Z | ---
language:
- code
pipeline_tag: text-generation
tags:
- llama-2
- mlx
license: llama2
library_name: mlx
base_model: codellama/CodeLlama-7b-Instruct-hf
---
# mlx-community/CodeLlama-7b-Instruct-hf-8bit-mlx
This model [mlx-community/CodeLlama-7b-Instruct-hf-8bit-mlx](https://huggingface.co/mlx-community/CodeLlama-7b-Instruct-hf-8bit-mlx) was
converted to MLX format from [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf)
using mlx-lm version **0.23.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/CodeLlama-7b-Instruct-hf-8bit-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
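mlx-lm also ships a small command-line generator, which can be handier for quick tests (a minimal sketch; `mlx_lm.generate` is assumed to be the entry point installed by recent mlx-lm releases, so check `mlx_lm.generate --help` on your version):

```bash
mlx_lm.generate --model mlx-community/CodeLlama-7b-Instruct-hf-8bit-mlx --prompt "hello"
```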
|
shibajustfor/44f46e18-c475-41f3-a6cc-6863c1a1cc69 | shibajustfor | 2025-04-27T20:15:29Z | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:unsloth/SmolLM-1.7B",
"base_model:adapter:unsloth/SmolLM-1.7B",
"region:us"
] | null | 2025-04-27T20:15:15Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: unsloth/SmolLM-1.7B
model-index:
- name: shibajustfor/44f46e18-c475-41f3-a6cc-6863c1a1cc69
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# shibajustfor/44f46e18-c475-41f3-a6cc-6863c1a1cc69
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0060
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
RichardErkhov/homeb82784_-_qwen2.5-7b-instruct-cpt-v5.1-v1.0-gguf | RichardErkhov | 2025-04-27T20:15:16Z | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-27T18:51:48Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
qwen2.5-7b-instruct-cpt-v5.1-v1.0 - GGUF
- Model creator: https://huggingface.co/homeb82784/
- Original model: https://huggingface.co/homeb82784/qwen2.5-7b-instruct-cpt-v5.1-v1.0/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [qwen2.5-7b-instruct-cpt-v5.1-v1.0.Q2_K.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_qwen2.5-7b-instruct-cpt-v5.1-v1.0-gguf/blob/main/qwen2.5-7b-instruct-cpt-v5.1-v1.0.Q2_K.gguf) | Q2_K | 2.81GB |
| [qwen2.5-7b-instruct-cpt-v5.1-v1.0.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_qwen2.5-7b-instruct-cpt-v5.1-v1.0-gguf/blob/main/qwen2.5-7b-instruct-cpt-v5.1-v1.0.IQ3_XS.gguf) | IQ3_XS | 3.12GB |
| [qwen2.5-7b-instruct-cpt-v5.1-v1.0.IQ3_S.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_qwen2.5-7b-instruct-cpt-v5.1-v1.0-gguf/blob/main/qwen2.5-7b-instruct-cpt-v5.1-v1.0.IQ3_S.gguf) | IQ3_S | 3.26GB |
| [qwen2.5-7b-instruct-cpt-v5.1-v1.0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_qwen2.5-7b-instruct-cpt-v5.1-v1.0-gguf/blob/main/qwen2.5-7b-instruct-cpt-v5.1-v1.0.Q3_K_S.gguf) | Q3_K_S | 3.25GB |
| [qwen2.5-7b-instruct-cpt-v5.1-v1.0.IQ3_M.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_qwen2.5-7b-instruct-cpt-v5.1-v1.0-gguf/blob/main/qwen2.5-7b-instruct-cpt-v5.1-v1.0.IQ3_M.gguf) | IQ3_M | 3.33GB |
| [qwen2.5-7b-instruct-cpt-v5.1-v1.0.Q3_K.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_qwen2.5-7b-instruct-cpt-v5.1-v1.0-gguf/blob/main/qwen2.5-7b-instruct-cpt-v5.1-v1.0.Q3_K.gguf) | Q3_K | 3.55GB |
| [qwen2.5-7b-instruct-cpt-v5.1-v1.0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_qwen2.5-7b-instruct-cpt-v5.1-v1.0-gguf/blob/main/qwen2.5-7b-instruct-cpt-v5.1-v1.0.Q3_K_M.gguf) | Q3_K_M | 3.55GB |
| [qwen2.5-7b-instruct-cpt-v5.1-v1.0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_qwen2.5-7b-instruct-cpt-v5.1-v1.0-gguf/blob/main/qwen2.5-7b-instruct-cpt-v5.1-v1.0.Q3_K_L.gguf) | Q3_K_L | 3.81GB |
| [qwen2.5-7b-instruct-cpt-v5.1-v1.0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_qwen2.5-7b-instruct-cpt-v5.1-v1.0-gguf/blob/main/qwen2.5-7b-instruct-cpt-v5.1-v1.0.IQ4_XS.gguf) | IQ4_XS | 3.96GB |
| [qwen2.5-7b-instruct-cpt-v5.1-v1.0.Q4_0.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_qwen2.5-7b-instruct-cpt-v5.1-v1.0-gguf/blob/main/qwen2.5-7b-instruct-cpt-v5.1-v1.0.Q4_0.gguf) | Q4_0 | 4.13GB |
| [qwen2.5-7b-instruct-cpt-v5.1-v1.0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_qwen2.5-7b-instruct-cpt-v5.1-v1.0-gguf/blob/main/qwen2.5-7b-instruct-cpt-v5.1-v1.0.IQ4_NL.gguf) | IQ4_NL | 4.16GB |
| [qwen2.5-7b-instruct-cpt-v5.1-v1.0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_qwen2.5-7b-instruct-cpt-v5.1-v1.0-gguf/blob/main/qwen2.5-7b-instruct-cpt-v5.1-v1.0.Q4_K_S.gguf) | Q4_K_S | 4.15GB |
| [qwen2.5-7b-instruct-cpt-v5.1-v1.0.Q4_K.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_qwen2.5-7b-instruct-cpt-v5.1-v1.0-gguf/blob/main/qwen2.5-7b-instruct-cpt-v5.1-v1.0.Q4_K.gguf) | Q4_K | 4.36GB |
| [qwen2.5-7b-instruct-cpt-v5.1-v1.0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_qwen2.5-7b-instruct-cpt-v5.1-v1.0-gguf/blob/main/qwen2.5-7b-instruct-cpt-v5.1-v1.0.Q4_K_M.gguf) | Q4_K_M | 4.36GB |
| [qwen2.5-7b-instruct-cpt-v5.1-v1.0.Q4_1.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_qwen2.5-7b-instruct-cpt-v5.1-v1.0-gguf/blob/main/qwen2.5-7b-instruct-cpt-v5.1-v1.0.Q4_1.gguf) | Q4_1 | 4.54GB |
| [qwen2.5-7b-instruct-cpt-v5.1-v1.0.Q5_0.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_qwen2.5-7b-instruct-cpt-v5.1-v1.0-gguf/blob/main/qwen2.5-7b-instruct-cpt-v5.1-v1.0.Q5_0.gguf) | Q5_0 | 4.95GB |
| [qwen2.5-7b-instruct-cpt-v5.1-v1.0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_qwen2.5-7b-instruct-cpt-v5.1-v1.0-gguf/blob/main/qwen2.5-7b-instruct-cpt-v5.1-v1.0.Q5_K_S.gguf) | Q5_K_S | 4.95GB |
| [qwen2.5-7b-instruct-cpt-v5.1-v1.0.Q5_K.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_qwen2.5-7b-instruct-cpt-v5.1-v1.0-gguf/blob/main/qwen2.5-7b-instruct-cpt-v5.1-v1.0.Q5_K.gguf) | Q5_K | 5.07GB |
| [qwen2.5-7b-instruct-cpt-v5.1-v1.0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_qwen2.5-7b-instruct-cpt-v5.1-v1.0-gguf/blob/main/qwen2.5-7b-instruct-cpt-v5.1-v1.0.Q5_K_M.gguf) | Q5_K_M | 5.07GB |
| [qwen2.5-7b-instruct-cpt-v5.1-v1.0.Q5_1.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_qwen2.5-7b-instruct-cpt-v5.1-v1.0-gguf/blob/main/qwen2.5-7b-instruct-cpt-v5.1-v1.0.Q5_1.gguf) | Q5_1 | 5.36GB |
| [qwen2.5-7b-instruct-cpt-v5.1-v1.0.Q6_K.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_qwen2.5-7b-instruct-cpt-v5.1-v1.0-gguf/blob/main/qwen2.5-7b-instruct-cpt-v5.1-v1.0.Q6_K.gguf) | Q6_K | 5.82GB |
| [qwen2.5-7b-instruct-cpt-v5.1-v1.0.Q8_0.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_qwen2.5-7b-instruct-cpt-v5.1-v1.0-gguf/blob/main/qwen2.5-7b-instruct-cpt-v5.1-v1.0.Q8_0.gguf) | Q8_0 | 7.54GB |
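To try one of these quants locally, a minimal llama.cpp sketch (assuming a built llama.cpp checkout with `llama-cli` on your PATH and `huggingface-cli` installed; the file name comes from the table above):

```bash
# Fetch a single quant file from this repo
huggingface-cli download RichardErkhov/homeb82784_-_qwen2.5-7b-instruct-cpt-v5.1-v1.0-gguf \
  qwen2.5-7b-instruct-cpt-v5.1-v1.0.Q4_K_M.gguf --local-dir .

# Run a short interactive generation against it
llama-cli -m qwen2.5-7b-instruct-cpt-v5.1-v1.0.Q4_K_M.gguf -p "Hello" -n 128
```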
Original model description:
---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
eduardo-bolognini/imagecaptioning | eduardo-bolognini | 2025-04-27T20:14:16Z | 31 | 0 | transformers | [
"transformers",
"safetensors",
"blip",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-04-26T23:22:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
phospho-app/GetTheRubberNextG-44u87chuio | phospho-app | 2025-04-27T20:14:16Z | 0 | 0 | null | [
"phosphobot",
"gr00t",
"region:us"
] | null | 2025-04-27T19:36:48Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Traceback (most recent call last):
File "/opt/conda/lib/python3.11/site-packages/huggingface_hub/utils/_http.py", line 409, in hf_raise_for_status
response.raise_for_status()
File "/opt/conda/lib/python3.11/site-packages/requests/models.py", line 1024, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/api/models/nebo1337/GetTheRubberNextG/preupload/main
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/root/src/helper.py", line 367, in predict
api.upload_file(
File "/opt/conda/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/huggingface_hub/hf_api.py", line 1624, in _inner
return fn(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/huggingface_hub/hf_api.py", line 4662, in upload_file
commit_info = self.create_commit(
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/huggingface_hub/hf_api.py", line 1624, in _inner
return fn(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/huggingface_hub/hf_api.py", line 4193, in create_commit
self.preupload_lfs_files(
File "/opt/conda/lib/python3.11/site-packages/huggingface_hub/hf_api.py", line 4416, in preupload_lfs_files
_fetch_upload_modes(
File "/opt/conda/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/huggingface_hub/_commit_api.py", line 680, in _fetch_upload_modes
hf_raise_for_status(resp)
File "/opt/conda/lib/python3.11/site-packages/huggingface_hub/utils/_http.py", line 459, in hf_raise_for_status
raise _format(RepositoryNotFoundError, message, response) from e
huggingface_hub.errors.RepositoryNotFoundError: 404 Client Error. (Request ID: Root=1-680e9017-454b16ff3fa641a441110427;7b5d905f-fdf6-490d-a5c4-61ed9a661f37)
Repository Not Found for url: https://huggingface.co/api/models/nebo1337/GetTheRubberNextG/preupload/main.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated. For more details, see https://huggingface.co/docs/huggingface_hub/authentication
Note: Creating a commit assumes that the repo already exists on the Huggingface Hub. Please use `create_repo` if it's not the case.
```
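As the final note in the traceback suggests, the destination repo has to exist before files are uploaded. A minimal sketch of the fix with `huggingface_hub` (assuming you are authenticated with write access to the `nebo1337` namespace):

```python
from huggingface_hub import create_repo

# Create the model repo if it does not exist yet; exist_ok=True makes the
# call a no-op when the repo is already there.
create_repo("nebo1337/GetTheRubberNextG", repo_type="model", exist_ok=True)
```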
## Training parameters:
- **Dataset**: [nebo1337/GetTheRubberNextG](https://huggingface.co/datasets/nebo1337/GetTheRubberNextG)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 64
- **Training steps**: 1743
**Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=replicate_groot_training_pipeline)
**Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=replicate_groot_training_pipeline)
|
1y33/toy-llm | 1y33 | 2025-04-27T20:09:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-27T20:04:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
1shoomun/semant-cache-updated | 1shoomun | 2025-04-27T20:09:16Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"t5",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:2620",
"loss:MultipleNegativesRankingLoss",
"loss:CosineSimilarityLoss",
"custom_code",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:jinaai/jina-embedding-b-en-v1",
"base_model:finetune:jinaai/jina-embedding-b-en-v1",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-04-27T20:08:03Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:2620
- loss:MultipleNegativesRankingLoss
- loss:CosineSimilarityLoss
base_model: jinaai/jina-embedding-b-en-v1
widget:
- source_sentence: What sector am I most heavily invested in?
sentences:
- 'Show me how to switch my stock portfolio to mutual funds
'
- What percentage of my portfolio is in X
- Which sector do I invest most in?
- source_sentence: Can you tell me how my portfolio ranks among others?
sentences:
- What is my AMC wise split ?
- In which funds am I paying highest fees
- Compare my portfolio with others?
- source_sentence: Which of my funds has the highest risk level?
sentences:
- Give me python code to find best funds in my portfolio
- Show my stocks ranked by performance
- Show my riskiest mutual funds
- source_sentence: What's going right with my portfolio?
sentences:
- Is my portfolio linked?
- My portfolio returns over all the years
- What's going well in my portfolio
- source_sentence: I'd like to know the percentage of large cap in my investments.
sentences:
- Show my riskiest holdings
- Can you show what percentage of my portfolio consists of large cap
- What is the expected return of my portfolio?
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on jinaai/jina-embedding-b-en-v1
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: test eval
type: test-eval
metrics:
- type: cosine_accuracy@1
value: 0.8625954198473282
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9961832061068703
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.8625954198473282
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.33206106870229
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.19999999999999998
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09999999999999999
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.8625954198473282
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9961832061068703
name: Cosine Recall@3
- type: cosine_recall@5
value: 1.0
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9460250731496836
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9271628498727736
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9271628498727736
name: Cosine Map@100
---
# SentenceTransformer based on jinaai/jina-embedding-b-en-v1
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [jinaai/jina-embedding-b-en-v1](https://huggingface.co/jinaai/jina-embedding-b-en-v1). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [jinaai/jina-embedding-b-en-v1](https://huggingface.co/jinaai/jina-embedding-b-en-v1) <!-- at revision 32aa658e5ceb90793454d22a57d8e3a14e699516 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: T5EncoderModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("1shoomun/semant-cache-updated")
# Run inference
sentences = [
"I'd like to know the percentage of large cap in my investments.",
'Can you show what percentage of my portfolio consists of large cap',
'Show my riskiest holdings',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
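Given the paraphrase-matching training pairs above, a minimal semantic-cache lookup sketch on top of this API (the cached questions and the 0.9 threshold are illustrative assumptions, not values taken from training):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("1shoomun/semant-cache-updated")

cached_queries = [
    "What is my sector allocation?",
    "Show my stocks",
]

def lookup(query: str, threshold: float = 0.9):
    # Encode the new query together with the cached ones, then return the
    # index of the closest cached query if it clears the threshold.
    embeddings = model.encode([query] + cached_queries)
    scores = model.similarity(embeddings[0:1], embeddings[1:])[0]
    best = int(scores.argmax())
    return best if float(scores[best]) >= threshold else None

print(lookup("I'd like to know my sector distribution."))  # expected: 0
```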
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `test-eval`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:----------|
| cosine_accuracy@1 | 0.8626 |
| cosine_accuracy@3 | 0.9962 |
| cosine_accuracy@5 | 1.0 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.8626 |
| cosine_precision@3 | 0.3321 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.8626 |
| cosine_recall@3 | 0.9962 |
| cosine_recall@5 | 1.0 |
| cosine_recall@10 | 1.0 |
| **cosine_ndcg@10** | **0.946** |
| cosine_mrr@10 | 0.9272 |
| cosine_map@100 | 0.9272 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Datasets
#### Unnamed Dataset
* Size: 1,310 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:--------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 4 tokens</li><li>mean: 10.62 tokens</li><li>max: 22 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.06 tokens</li><li>max: 17 tokens</li></ul> | <ul><li>min: 1.0</li><li>mean: 1.0</li><li>max: 1.0</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:--------------------------------------------------------------------|:-------------------------------------------------------------------|:-----------------|
| <code>are there any of my funds that are lagging behind</code> | <code>do I hold any funds that haven't been performing well</code> | <code>1.0</code> |
| <code>Which sectors are performing the best in my portfolio?</code> | <code>What are my best performing sectors?</code> | <code>1.0</code> |
| <code>List some of my top holdings</code> | <code>Show some of my best performing holdings</code> | <code>1.0</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### Unnamed Dataset
* Size: 1,310 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:--------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 4 tokens</li><li>mean: 10.68 tokens</li><li>max: 22 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.13 tokens</li><li>max: 17 tokens</li></ul> | <ul><li>min: 1.0</li><li>mean: 1.0</li><li>max: 1.0</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:--------------------------------------------------------------------|:----------------------------------------------------------|:-----------------|
| <code>I need my portfolio to hit 1000% returns by next month</code> | <code>make my portfolio return 1000% by next month</code> | <code>1.0</code> |
| <code>What are my stocks?</code> | <code>Show my stocks</code> | <code>1.0</code> |
| <code>I'd like to know my sector distribution.</code> | <code>What is my sector allocation?</code> | <code>1.0</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `num_train_epochs`: 15
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 15
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss | test-eval_cosine_ndcg@10 |
|:-------:|:----:|:-------------:|:------------------------:|
| 1.0 | 82 | - | 0.8929 |
| 2.0 | 164 | - | 0.9007 |
| 3.0 | 246 | - | 0.9112 |
| 4.0 | 328 | - | 0.9188 |
| 5.0 | 410 | - | 0.9285 |
| 6.0 | 492 | - | 0.9286 |
| 6.0976 | 500 | 0.2352 | 0.9291 |
| 7.0 | 574 | - | 0.9356 |
| 8.0 | 656 | - | 0.9404 |
| 9.0 | 738 | - | 0.9406 |
| 10.0 | 820 | - | 0.9434 |
| 11.0 | 902 | - | 0.9424 |
| 12.0 | 984 | - | 0.9455 |
| 12.1951 | 1000 | 0.164 | 0.9460 |
### Framework Versions
- Python: 3.10.16
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.7.0
- Accelerate: 1.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
jonkinesis/kaianoir | jonkinesis | 2025-04-27T20:08:21Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-04-27T02:53:24Z | ---
license: creativeml-openrail-m
---
|
barryvelleda/moi | barryvelleda | 2025-04-27T19:54:42Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-27T19:42:53Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: EWEN
---
# Moi
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `EWEN` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "EWEN",
"lora_weights": "https://huggingface.co/barryvelleda/moi/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('barryvelleda/moi', weight_name='lora.safetensors')
image = pipeline('EWEN').images[0]
```
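To keep the result (a small follow-up to the snippet above; the file name is arbitrary):

```py
image.save("ewen_output.png")
```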
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1145
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/barryvelleda/moi/discussions) to add images that show off what you've made with this LoRA.
|
Mohammad12141000/Sote12141000 | Mohammad12141000 | 2025-04-27T19:48:46Z | 0 | 0 | null | [
"automatic-speech-recognition",
"fa",
"license:apache-2.0",
"region:us"
] | automatic-speech-recognition | 2025-04-27T19:37:51Z | ---
license: apache-2.0
language:
- fa
pipeline_tag: automatic-speech-recognition
--- |
3mily1u/new-codegen-350m-mono-dpoed-control-25-0.5 | 3mily1u | 2025-04-27T19:47:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"codegen",
"text-generation",
"trl",
"dpo",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T19:47:11Z | ---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ASENlw/flux-Fine_adjustment | ASENlw | 2025-04-27T19:47:34Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-04-25T03:42:42Z | ---
license: other
license_name: flux.1-dev
license_link: LICENSE
---
|
3mily1u/new-codegen-350m-mono-dpoed-attack-10-0.5 | 3mily1u | 2025-04-27T19:43:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"codegen",
"text-generation",
"trl",
"dpo",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T19:42:24Z | ---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
HanningZhang/Qwen2.5-Math-7B-raft-plusplus_cliphigher032_em-iter3 | HanningZhang | 2025-04-27T19:42:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T19:39:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vmpsergio/d0e2ac9b-94db-488a-bb10-f242252cc72d | vmpsergio | 2025-04-27T19:41:04Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-Coder-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-Coder-7B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-27T19:20:41Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d0e2ac9b-94db-488a-bb10-f242252cc72d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 49955f9bfdb4305e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/49955f9bfdb4305e_train_data.json
type:
field_input: tools
field_instruction: query
field_output: answers
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: vmpsergio/d0e2ac9b-94db-488a-bb10-f242252cc72d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/49955f9bfdb4305e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 87871ecd-532d-4323-8d86-dd165a6705f0
wandb_project: s56-2
wandb_run: your_name
wandb_runid: 87871ecd-532d-4323-8d86-dd165a6705f0
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# d0e2ac9b-94db-488a-bb10-f242252cc72d
This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0581
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.065 | 0.0296 | 200 | 0.0581 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
72-Gangu-Chettri-Kanda-Go-Viral-Link/Original.Viral.Clip.Gangu.Chettri.Kanda.Viral.Video.Leaks.official | 72-Gangu-Chettri-Kanda-Go-Viral-Link | 2025-04-27T19:40:08Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-27T19:39:43Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/37xdy3ks?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
Gangu Chettri Kanda 7.2 link trending across the internet! Here's a verified, safe, and community-guided explanation of what's happening, why it's trending, and how the public is reacting. In this SEO-friendly post, we break down the context, sources, and the social impact behind the viral topic. Our content complies with Blogger, AdSense, and Google policies, focusing on responsible sharing and educational awareness. Whether you're curious or just want a clear explanation, we've got you covered: no explicit content, no harmful links, just real talk. Stay informed and stay safe online.
|
goatley/sentiment-final-model | goatley | 2025-04-27T19:37:29Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-26T18:19:00Z | ---
library_name: transformers
license: mit
---
# Model Card for `goatley/sentiment-final-model`
This model is a fine-tuned **DistilBERT** model for **binary sentiment classification** (positive/negative) of English text reviews.
It was developed as part of an advanced NLP dashboard project demonstrating applied skills in deep learning, NLP engineering, and full-stack app deployment.
## Model Details
### Model Description
- **Developed by:** Keith Goatley
- **License:** MIT
- **Model type:** DistilBERT-based Sequence Classification (Binary)
- **Language(s):** English
- **Fine-tuned from:** `distilbert-base-uncased`
- **Base model:** Hugging Face Transformers v4
- **Framework:** PyTorch
### Model Sources
- **Repository:** [GitHub Repository](https://github.com/Keithgoatley/sentiment-analysis-app)
- **Demo:** [Hugging Face Space (when deployed)](https://huggingface.co/spaces/goatley/sentiment-analysis-dashboard)
## Uses
### Direct Use
- Classifying short text reviews (e.g., Amazon product reviews) into **positive** or **negative** sentiment.
### Downstream Use
- Embedding inside sentiment-driven recommendation engines
- As a component of multi-task NLP dashboards
- Fine-tuning for domain-specific sentiment (e.g., medical, finance, hospitality reviews)
### Out-of-Scope Use
- Not designed for languages other than English.
- Not suited for emotion detection beyond binary sentiment.
## Bias, Risks, and Limitations
This model was fine-tuned on Amazon reviews, which may carry biases toward product-related expressions and cultural language patterns.
Users should be cautious when applying the model outside typical e-commerce datasets.
### Recommendations
For more robust domain generalization:
- Further fine-tuning on task-specific datasets is advised.
## How to Get Started with the Model
```python
from transformers import pipeline
classifier = pipeline(
    "sentiment-analysis",
    model="goatley/sentiment-final-model",
    tokenizer="goatley/sentiment-final-model"
)
classifier(["I love this!", "This was awful."])
```

## Training Details

### Training Data
- Subset of Amazon Reviews Dataset
- Balanced 50/50 split of positive and negative reviews
- Approximately 5,000 examples used for fine-tuning

### Training Procedure
- Fine-tuned for 3 epochs
- Learning rate scheduling with warmup
- Optimizer: AdamW
- Batch size: 16
- Device: CPU-based training (GitHub Codespaces)

#### Training Hyperparameters
- Learning rate: 5e-5
- Optimizer: AdamW
- Max sequence length: 512
- Epochs: 3

## Evaluation

### Testing Data
Held-out test split from the Amazon Reviews dataset.

### Metrics
| Metric | Score |
|:--------------|:------|
| Test Accuracy | 85% |

Evaluation was performed using basic classification metrics (accuracy, precision, recall, F1-score).

## Environmental Impact
- **Hardware Type:** CPU (GitHub Codespaces)
- **Hours Used:** ~2 hours
- **Cloud Provider:** GitHub (Microsoft Azure backend)
- **Compute Region:** North America
- **Carbon Emitted:** Negligible (very small dataset + CPU-only fine-tuning)

## Technical Specifications

### Model Architecture and Objective
- Architecture: DistilBERT Transformer Encoder
- Task objective: Sequence classification with 2 labels (positive, negative)

### Compute Infrastructure
Training was performed on GitHub Codespaces virtual machines; no GPUs were used.

### Software Environment
- Hugging Face `transformers==4.51.3`
- Datasets `datasets==3.5.0`
- PyTorch `torch==2.6.0`

## Citation
If you use this model or find it helpful, please cite:

**APA:**
Goatley, K. (2025). *Sentiment Analysis Fine-Tuned DistilBERT Model* [Model]. Hugging Face. https://huggingface.co/goatley/sentiment-final-model

**BibTeX:**
```bibtex
@misc{goatley2025sentiment,
  author = {Keith Goatley},
  title = {Sentiment Analysis Fine-Tuned DistilBERT Model},
  year = {2025},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/goatley/sentiment-final-model}}
}
```

## Model Card Authors
Keith Goatley

## Contact
For questions or inquiries, please contact via:
- GitHub: https://github.com/Keithgoatley
- Hugging Face: https://huggingface.co/goatley
|
mradermacher/openthoughts2-GGUF | mradermacher | 2025-04-27T19:35:35Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:mlfoundations-dev/openthoughts2",
"base_model:quantized:mlfoundations-dev/openthoughts2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-27T18:52:02Z | ---
base_model: mlfoundations-dev/openthoughts2
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/mlfoundations-dev/openthoughts2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
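For example, a minimal sketch with llama-cpp-python (an assumption on my part; any GGUF-capable runtime such as llama.cpp, LM Studio, or ollama works equally well):

```python
from llama_cpp import Llama

# Point model_path at a downloaded quant, e.g. the Q4_K_M file from the table below.
llm = Llama(model_path="openthoughts2.Q4_K_M.gguf", n_ctx=4096)
out = llm("Explain quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```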
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/openthoughts2-GGUF/resolve/main/openthoughts2.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/openthoughts2-GGUF/resolve/main/openthoughts2.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/openthoughts2-GGUF/resolve/main/openthoughts2.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/openthoughts2-GGUF/resolve/main/openthoughts2.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/openthoughts2-GGUF/resolve/main/openthoughts2.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/openthoughts2-GGUF/resolve/main/openthoughts2.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/openthoughts2-GGUF/resolve/main/openthoughts2.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/openthoughts2-GGUF/resolve/main/openthoughts2.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/openthoughts2-GGUF/resolve/main/openthoughts2.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/openthoughts2-GGUF/resolve/main/openthoughts2.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/openthoughts2-GGUF/resolve/main/openthoughts2.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/openthoughts2-GGUF/resolve/main/openthoughts2.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
AdityaAnupGupta/patent_lora_model | AdityaAnupGupta | 2025-04-27T19:33:06Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null | 2025-04-27T19:27:08Z | ---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
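Since the card leaves this blank, here is a minimal sketch of attaching the LoRA adapter to its base model with peft (precision settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # base model from the card metadata
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)

# Wrap the frozen base model with this LoRA adapter.
model = PeftModel.from_pretrained(base, "AdityaAnupGupta/patent_lora_model")
```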
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
Alphatao/8c1091bc-e0a2-4da7-86c8-1f5f1a432899 | Alphatao | 2025-04-27T19:32:01Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:unsloth/Meta-Llama-3.1-8B",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T15:01:47Z | ---
base_model: unsloth/Meta-Llama-3.1-8B
library_name: transformers
model_name: 8c1091bc-e0a2-4da7-86c8-1f5f1a432899
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---
# Model Card for 8c1091bc-e0a2-4da7-86c8-1f5f1a432899
This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B](https://huggingface.co/unsloth/Meta-Llama-3.1-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Alphatao/8c1091bc-e0a2-4da7-86c8-1f5f1a432899", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/alphatao-alphatao/Gradients-On-Demand/runs/6z8o1iuj)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
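For reference, a minimal sketch of a TRL DPO run (the preference dataset and hyperparameters here are illustrative, not the ones used for this model):

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model = AutoModelForCausalLM.from_pretrained("unsloth/Meta-Llama-3.1-8B")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Meta-Llama-3.1-8B")

# DPO trains on preference pairs: a prompt with a chosen and a rejected answer.
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="dpo-out", beta=0.1),
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```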
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
96sanjay/Reinforce-cartpole-v1 | 96sanjay | 2025-04-27T19:31:04Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-27T19:30:45Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 213.10 +/- 66.67
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Ruzel23/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-freckled_bristly_dolphin | Ruzel23 | 2025-04-27T19:30:26Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am freckled bristly dolphin",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-16T23:16:20Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-freckled_bristly_dolphin
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am freckled bristly dolphin
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-freckled_bristly_dolphin
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Ruzel23/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-freckled_bristly_dolphin", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
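For reference, a minimal sketch of a TRL GRPO run (the toy reward function and dataset are illustrative, not the ones used for this model):

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# GRPO optimizes generations against one or more reward functions;
# this toy reward simply favors completions close to 20 characters.
def reward_len(completions, **kwargs):
    return [-abs(20 - len(completion)) for completion in completions]

dataset = load_dataset("trl-lib/tldr", split="train")

trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="grpo-out"),
    train_dataset=dataset,
)
trainer.train()
```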
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
3mily1u/new-codegen-350m-mono-dpoed-control-50-0.1 | 3mily1u | 2025-04-27T19:29:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"codegen",
"text-generation",
"trl",
"dpo",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T19:28:48Z | ---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fengyao1909/scp_sft_0.5b | fengyao1909 | 2025-04-27T19:28:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T19:27:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
marialvsantiago/970aac2e-8543-43dd-97a7-cb70f84045d0 | marialvsantiago | 2025-04-27T19:22:17Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:Intel/neural-chat-7b-v3-3",
"base_model:adapter:Intel/neural-chat-7b-v3-3",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-27T19:17:24Z | ---
library_name: peft
license: apache-2.0
base_model: Intel/neural-chat-7b-v3-3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 970aac2e-8543-43dd-97a7-cb70f84045d0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Intel/neural-chat-7b-v3-3
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a0c2b0c2a290b3a0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a0c2b0c2a290b3a0_train_data.json
type:
field_input: Context
field_instruction: Question
field_output: Answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: marialvsantiago/970aac2e-8543-43dd-97a7-cb70f84045d0
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/a0c2b0c2a290b3a0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3002708a-6c81-44ee-8407-ea206b5cbbd2
wandb_project: s56-33
wandb_run: your_name
wandb_runid: 3002708a-6c81-44ee-8407-ea206b5cbbd2
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 970aac2e-8543-43dd-97a7-cb70f84045d0
This model is a fine-tuned version of [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4918
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4215 | 0.0992 | 200 | 0.4918 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
PingVortex/Youtube-shorts-comment-generator | PingVortex | 2025-04-27T19:18:16Z | 43 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"art",
"emoji",
"brainrot",
"en",
"fr",
"tr",
"dataset:PingVortex/Youtube_shorts_comments",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:cc0-1.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-26T16:25:01Z | ---
license: cc0-1.0
language:
- en
- fr
- tr
tags:
- art
- emoji
- brainrot
- text-generation
pretty_name: DistilGPT2 fine-tuned on YouTube Shorts comments
size_categories:
- 10M<n<100M
datasets:
- PingVortex/Youtube_shorts_comments
base_model:
- distilbert/distilgpt2
pipeline_tag: text-generation
library_name: transformers
---
# YouTube Shorts comment generator 🧠 (I couldn't come up with a more original name)
A **fine-tuned DistilGPT2 model** trained on 1.4M+ YouTube Shorts comments: the perfect language model for generating cursed internet humor, emoji spam, and authentic YouTube degeneracy.
- Base model: [distilgpt2](https://huggingface.co/distilgpt2)
- Trained on: [YouTube Shorts Comments Dataset](https://huggingface.co/datasets/PingVortex/Youtube_shorts_comments)
- Creator: [PingVortex](https://github.com/PingVortex)
## Model Details 🔥
- **Parameters**: 82M (DistilGPT2 architecture)
- **Training Data**: 1,475,500 YouTube Shorts comments
- **Special Skills**: Emoji generation, broken English, random character generation
## Usage Example
```python
from transformers import pipeline
brainrot = pipeline('text-generation', model='PingVortex/Youtube-shorts-comment-generator')
output = brainrot("When you see a Sigma edit:", max_length=50)
print(output[0]['generated_text'])
```
*Sample output:*
`"When you see a Sigma edit: ๐๐๐๐ The white one on the last pic?๐๐๐๐
๐
๐
๐๐๐๐
๐ฎ๐ฎ๐
"`
## Training Info ⚙️
- **Epochs**: 1
- **Batch Size**: 8
- **Hardware**: Google Colab T4 GPU
- **Training Time**: ~2 hours
- **Loss**: 0.24
## Ethical Considerations ⚠️
This model may generate:
- Extreme emoji spam (🔥😂🤣)
- Nonsensical combinations
- Mild brain damage
- Occasional coherent text
Use responsibly (or irresponsibly, we don't judge).
## License
**CC0 1.0 Universal** (Public Domain)
*Go nuts - no restrictions*
## Shoutouts
- Subscribe to [FaceDev](https://youtube.com/@FaceDevStuff)
- Join my [Discord](https://discord.gg/At3CcCqcR2) |