modelId: string | author: string | last_modified: timestamp[us, tz=UTC] | downloads: int64 | likes: int64 | library_name: string | tags: sequence | pipeline_tag: string | createdAt: timestamp[us, tz=UTC] | card: string
---|---|---|---|---|---|---|---|---|---|
Trarose/my_awesome_model | Trarose | 2025-04-27T22:36:51Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-27T21:38:57Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2324
- Accuracy: 0.9315
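Since the card omits a usage snippet, here is a minimal inference sketch (an assumption based on the repo's `distilbert`/`text-classification` tags; the label mapping is not documented in this card):

```python
from transformers import pipeline

# Hypothetical usage: score a sample sentence with the fine-tuned checkpoint.
classifier = pipeline("text-classification", model="Trarose/my_awesome_model")
print(classifier("This movie was absolutely wonderful."))
# -> [{'label': 'LABEL_1', 'score': ...}]  (label names depend on the training config)
```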
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (`OptimizerNames.ADAMW_TORCH`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2228 | 1.0 | 1563 | 0.2040 | 0.9226 |
| 0.1468 | 2.0 | 3126 | 0.2324 | 0.9315 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
rdoshi21/detr-finetuned-franka2 | rdoshi21 | 2025-04-27T22:29:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"detr",
"object-detection",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | object-detection | 2025-04-27T21:57:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
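Until the author fills this in, a minimal sketch based on the repo's `detr`/`object-detection` tags (the image path is a placeholder):

```python
from transformers import pipeline

# Hypothetical usage: run the DETR checkpoint through the standard
# object-detection pipeline on a local image.
detector = pipeline("object-detection", model="rdoshi21/detr-finetuned-franka2")
for det in detector("scene.jpg"):  # path, URL, or PIL.Image
    print(det["label"], round(det["score"], 3), det["box"])
```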
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
zanetwice/suzylittlemusiclora01_lg_hc_1250steps | zanetwice | 2025-04-27T22:26:47Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-27T22:10:43Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: SUZYLITTLEMUSICLORA01
---
# Suzylittlemusiclora01_Lg_Hc_1250Steps
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `SUZYLITTLEMUSICLORA01` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "SUZYLITTLEMUSICLORA01",
"lora_weights": "https://huggingface.co/zanetwice/suzylittlemusiclora01_lg_hc_1250steps/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('zanetwice/suzylittlemusiclora01_lg_hc_1250steps', weight_name='lora.safetensors')
image = pipeline('SUZYLITTLEMUSICLORA01').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1250
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/zanetwice/suzylittlemusiclora01_lg_hc_1250steps/discussions) to add images that show off what you’ve made with this LoRA.
|
gogo5142367/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mighty_striped_mosquito | gogo5142367 | 2025-04-27T22:24:20Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am mighty striped mosquito",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-25T10:39:11Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mighty_striped_mosquito
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am mighty striped mosquito
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mighty_striped_mosquito
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="gogo5142367/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mighty_striped_mosquito", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
3mily1u/new-codegen-350m-mono-dpoed-control-50-1 | 3mily1u | 2025-04-27T22:22:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"codegen",
"text-generation",
"trl",
"dpo",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T22:21:29Z | ---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
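Until the author fills this in, a minimal sketch based on the repo's `codegen`/`text-generation` tags (the prompt is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical usage: greedy completion with the DPO-tuned CodeGen checkpoint.
model_id = "3mily1u/new-codegen-350m-mono-dpoed-control-50-1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```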
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dgambettaphd/M_llm2_gen1_run0_X_doc1000_synt64_tot128_SYNLAST | dgambettaphd | 2025-04-27T22:17:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-27T22:17:25Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MoyYuan/Varifocal-Reranking-Answer-Metadata | MoyYuan | 2025-04-27T22:09:08Z | 0 | 0 | null | [
"pytorch",
"bert",
"en",
"dataset:MoyYuan/Varifocal-Reranking",
"license:mit",
"region:us"
] | null | 2025-04-27T21:49:11Z | ---
license: mit
datasets:
- MoyYuan/Varifocal-Reranking
language:
- en
---
Please refer to https://huggingface.co/datasets/MoyYuan/Varifocal for README information. |
borisloktev/qwen2_5-3B-bf16ft-base-pinpointing-lora-fix-prompt_big_full_json_output_ft_base_full_dataset | borisloktev | 2025-04-27T22:08:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:borisloktev/qwen2_5-3B-fb16-extraction-new-schema",
"base_model:finetune:borisloktev/qwen2_5-3B-fb16-extraction-new-schema",
"endpoints_compatible",
"region:us"
] | null | 2025-04-27T21:05:16Z | ---
base_model: borisloktev/qwen2_5-3B-fb16-extraction-new-schema
library_name: transformers
model_name: qwen2_5-3B-bf16ft-base-pinpointing-lora-fix-prompt_big_full_json_output_ft_base_full_dataset
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen2_5-3B-bf16ft-base-pinpointing-lora-fix-prompt_big_full_json_output_ft_base_full_dataset
This model is a fine-tuned version of [borisloktev/qwen2_5-3B-fb16-extraction-new-schema](https://huggingface.co/borisloktev/qwen2_5-3B-fb16-extraction-new-schema).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="borisloktev/qwen2_5-3B-bf16ft-base-pinpointing-lora-fix-prompt_big_full_json_output_ft_base_full_dataset", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/5-plus/qwen2_5-3B-pinpointing/runs/e3bi9nd9)
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0.dev0
- Transformers: 4.51.0.dev0
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
ianaraujo/bert-portuguese-asset-management-sentiment | ianaraujo | 2025-04-27T22:07:40Z | 0 | 0 | null | [
"safetensors",
"bert",
"text-classification",
"pt",
"base_model:neuralmind/bert-base-portuguese-cased",
"base_model:finetune:neuralmind/bert-base-portuguese-cased",
"license:mit",
"region:us"
] | text-classification | 2025-04-27T22:03:28Z | ---
license: mit
language:
- pt
base_model:
- neuralmind/bert-base-portuguese-cased
pipeline_tag: text-classification
--- |
mlx-community/CodeLlama-13b-hf-8bit-mlx | mlx-community | 2025-04-27T21:56:04Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"llama",
"llama-2",
"text-generation",
"code",
"base_model:codellama/CodeLlama-13b-hf",
"base_model:quantized:codellama/CodeLlama-13b-hf",
"license:llama2",
"8-bit",
"region:us"
] | text-generation | 2025-04-27T21:43:49Z | ---
language:
- code
pipeline_tag: text-generation
tags:
- llama-2
- mlx
license: llama2
library_name: mlx
base_model: codellama/CodeLlama-13b-hf
---
# mlx-community/CodeLlama-13b-hf-8bit-mlx
This model [mlx-community/CodeLlama-13b-hf-8bit-mlx](https://huggingface.co/mlx-community/CodeLlama-13b-hf-8bit-mlx) was
converted to MLX format from [codellama/CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf)
using mlx-lm version **0.23.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/CodeLlama-13b-hf-8bit-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
THP2903/Qwen2-VL-2B-Instruct_impression_v2 | THP2903 | 2025-04-27T21:52:15Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:Qwen/Qwen2-VL-2B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-2B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-04-17T01:07:12Z | ---
base_model: Qwen/Qwen2-VL-2B-Instruct
library_name: transformers
model_name: Qwen2-VL-2B-Instruct_impression_v2
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen2-VL-2B-Instruct_impression_v2
This model is a fine-tuned version of [Qwen/Qwen2-VL-2B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="THP2903/Qwen2-VL-2B-Instruct_impression_v2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phucth290303-pythera/Qwen2-VL-2B-Instruct_impression/runs/cl6klszk)
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
pictgencustomer/mardigras_270 | pictgencustomer | 2025-04-27T21:51:50Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-27T21:51:47Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: mardigras_michaeluffer_2
---
# Mardigras_270
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `mardigras_michaeluffer_2` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('pictgencustomer/mardigras_270', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
mlfoundations-dev/c1_science_0d_1s_0.3k | mlfoundations-dev | 2025-04-27T21:50:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T20:47:43Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: c1_science_0d_1s_0.3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# c1_science_0d_1s_0.3k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/c1_science_0d_1s_0.3k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 13.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
relrurel30/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scaly_aquatic_wildebeest | relrurel30 | 2025-04-27T21:46:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am scaly aquatic wildebeest",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-25T13:12:57Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scaly_aquatic_wildebeest
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am scaly aquatic wildebeest
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scaly_aquatic_wildebeest
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="relrurel30/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scaly_aquatic_wildebeest", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
nyuuzyou/Qwen2.5-0.5B-Bluesky | nyuuzyou | 2025-04-27T21:43:47Z | 12 | 0 | transformers | [
"transformers",
"gguf",
"bluesky",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:alpindale/two-million-bluesky-posts",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:quantized:Qwen/Qwen2.5-0.5B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-29T10:04:42Z | ---
base_model: Qwen/Qwen2.5-0.5B
tags:
- bluesky
datasets:
- alpindale/two-million-bluesky-posts
license: other
pipeline_tag: text-generation
library_name: transformers
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# Qwen2.5-0.5B-Bluesky
This model is a fine-tuned version of the [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B) language model on the [alpindale/two-million-bluesky-posts](https://huggingface.co/datasets/alpindale/two-million-bluesky-posts) dataset.
**License**: Dataset usage is subject to Bluesky's Terms of Service. |
RocktimMBZ/Qwen2_5-planning_no_state | RocktimMBZ | 2025-04-27T21:42:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-04-27T21:38:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
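Until the author fills this in, a minimal sketch based on the repo's `qwen2_5_vl`/`image-text-to-text` tags (the image URL and prompt are placeholders):

```python
from transformers import pipeline

# Hypothetical usage: chat-style image+text prompting via the
# image-text-to-text pipeline.
pipe = pipeline("image-text-to-text", model="RocktimMBZ/Qwen2_5-planning_no_state")
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://example.com/scene.jpg"},
        {"type": "text", "text": "Outline a step-by-step plan for this scene."},
    ],
}]
print(pipe(text=messages, max_new_tokens=128)[0]["generated_text"])
```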
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AchrafAzzaouiRiceU/t5_base_ec23_4-27 | AchrafAzzaouiRiceU | 2025-04-27T21:42:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-04-27T21:42:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
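Until the author fills this in, a minimal sketch based on the repo's `t5`/`text2text-generation` tags (the expected input format or task prefix is not documented here):

```python
from transformers import pipeline

# Hypothetical usage: generic text-to-text generation with the T5 checkpoint.
pipe = pipeline("text2text-generation", model="AchrafAzzaouiRiceU/t5_base_ec23_4-27")
print(pipe("your input text here", max_new_tokens=64)[0]["generated_text"])
```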
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/QwQ-32B-Brh-i1-GGUF | mradermacher | 2025-04-27T21:41:39Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Yobenboben/QwQ-32B-Brh",
"base_model:quantized:Yobenboben/QwQ-32B-Brh",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-27T16:33:23Z | ---
base_model: Yobenboben/QwQ-32B-Brh
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Yobenboben/QwQ-32B-Brh
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/QwQ-32B-Brh-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
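As a concrete starting point, here is a minimal sketch using `huggingface_hub` plus the third-party `llama-cpp-python` bindings (both tool choices are assumptions; any GGUF-capable runtime works, and Q4_K_M is the quant the table below marks as recommended):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download one quant from this repo and run a short completion.
path = hf_hub_download(
    repo_id="mradermacher/QwQ-32B-Brh-i1-GGUF",
    filename="QwQ-32B-Brh.i1-Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Explain imatrix quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```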
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Brh-i1-GGUF/resolve/main/QwQ-32B-Brh.i1-IQ1_S.gguf) | i1-IQ1_S | 7.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Brh-i1-GGUF/resolve/main/QwQ-32B-Brh.i1-IQ1_M.gguf) | i1-IQ1_M | 8.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Brh-i1-GGUF/resolve/main/QwQ-32B-Brh.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Brh-i1-GGUF/resolve/main/QwQ-32B-Brh.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Brh-i1-GGUF/resolve/main/QwQ-32B-Brh.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Brh-i1-GGUF/resolve/main/QwQ-32B-Brh.i1-IQ2_M.gguf) | i1-IQ2_M | 11.4 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Brh-i1-GGUF/resolve/main/QwQ-32B-Brh.i1-Q2_K_S.gguf) | i1-Q2_K_S | 11.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Brh-i1-GGUF/resolve/main/QwQ-32B-Brh.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Brh-i1-GGUF/resolve/main/QwQ-32B-Brh.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Brh-i1-GGUF/resolve/main/QwQ-32B-Brh.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Brh-i1-GGUF/resolve/main/QwQ-32B-Brh.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Brh-i1-GGUF/resolve/main/QwQ-32B-Brh.i1-IQ3_S.gguf) | i1-IQ3_S | 14.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Brh-i1-GGUF/resolve/main/QwQ-32B-Brh.i1-IQ3_M.gguf) | i1-IQ3_M | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Brh-i1-GGUF/resolve/main/QwQ-32B-Brh.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Brh-i1-GGUF/resolve/main/QwQ-32B-Brh.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Brh-i1-GGUF/resolve/main/QwQ-32B-Brh.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Brh-i1-GGUF/resolve/main/QwQ-32B-Brh.i1-Q4_0.gguf) | i1-Q4_0 | 18.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Brh-i1-GGUF/resolve/main/QwQ-32B-Brh.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Brh-i1-GGUF/resolve/main/QwQ-32B-Brh.i1-Q4_K_M.gguf) | i1-Q4_K_M | 19.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Brh-i1-GGUF/resolve/main/QwQ-32B-Brh.i1-Q4_1.gguf) | i1-Q4_1 | 20.7 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Brh-i1-GGUF/resolve/main/QwQ-32B-Brh.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Brh-i1-GGUF/resolve/main/QwQ-32B-Brh.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Brh-i1-GGUF/resolve/main/QwQ-32B-Brh.i1-Q6_K.gguf) | i1-Q6_K | 27.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mlfoundations-dev/c1_math_nod_4s_10k | mlfoundations-dev | 2025-04-27T21:37:52Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T21:35:11Z | ---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: c1_math_nod_4s_10k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# c1_math_nod_4s_10k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/c1_math_nod_4s_10k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.0a0+b465a5843b.nv24.09
- Datasets 3.5.0
- Tokenizers 0.20.3
|
jcorblaz/ppo-Huggy | jcorblaz | 2025-04-27T21:32:54Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2025-04-27T21:32:44Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: jcorblaz/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
10-NEW-EXCLUSIVE-HOT-CLIP/FULL.VIDEO.LINK.Samiya.Hijab.Viral.Video.Leaks.official | 10-NEW-EXCLUSIVE-HOT-CLIP | 2025-04-27T21:29:38Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-27T21:28:49Z |
Samiya Hijab Viral Video Trending: watch, Full Story, Facts & Public Reaction
Discover the real story behind the Samiya Hijab viral video that's trending across social media. What happened, why it's viral, and public response – all here.
The Samiya Hijab viral video has captured widespread attention online, creating waves on platforms like TikTok, Instagram, and Twitter. In this post, we will explore what the video is about, why it became viral, and how it reflects social trends and public sentiments. This post follows Blogger, AdSense, and SEO guidelines and contains no explicit content. It's focused on information, awareness, and responsible reporting while keeping our audience updated with accurate details. |
kitoide/null | kitoide | 2025-04-27T21:25:34Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-27T05:31:36Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: 0PJZY
---
# Null
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `0PJZY` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "0PJZY",
"lora_weights": "https://huggingface.co/kitoide/null/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
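This assumes the `replicate` Python client is installed (`pip install replicate`) and authenticated through the `REPLICATE_API_TOKEN` environment variable.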
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('kitoide/null', weight_name='lora.safetensors')
image = pipeline('0PJZY').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
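As a minimal sketch of the weighting and fusing mentioned above (assuming a recent diffusers release with the PEFT backend; the 0.8 scale is illustrative, and `joint_attention_kwargs` is the Flux pipeline's hook for the LoRA scale):
```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('kitoide/null', weight_name='lora.safetensors')

# Down-weight the LoRA at inference time (1.0 = full strength).
image = pipeline('0PJZY', joint_attention_kwargs={'scale': 0.8}).images[0]

# Alternatively, bake the LoRA into the base weights at a fixed scale
# and drop the adapter to save memory.
pipeline.fuse_lora(lora_scale=0.8)
pipeline.unload_lora_weights()
```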
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/kitoide/null/discussions) to add images that show off what you’ve made with this LoRA.
|
fats-fme/4beb9e35-cfc6-4fd4-b960-32d49a7b1fb2 | fats-fme | 2025-04-27T21:25:06Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-04-27T21:16:59Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4beb9e35-cfc6-4fd4-b960-32d49a7b1fb2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2-1.5B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ebcd56ac8fce6f94_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ebcd56ac8fce6f94_train_data.json
type:
field_input: description
field_instruction: question
field_output: objective
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: fats-fme/4beb9e35-cfc6-4fd4-b960-32d49a7b1fb2
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 128
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 130GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/ebcd56ac8fce6f94_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
saves_per_epoch: null
sequence_len: 2048
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0aa825e4-726f-4f0f-8053-b15f7220120d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0aa825e4-726f-4f0f-8053-b15f7220120d
warmup_steps: 200
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 4beb9e35-cfc6-4fd4-b960-32d49a7b1fb2
This model is a fine-tuned version of [Qwen/Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2416
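A minimal loading sketch for this adapter (assuming the PEFT/Transformers versions listed under Framework versions, with the adapter weights at the repo root):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-1.5B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-1.5B-Instruct")

# Attach the trained LoRA adapter on top of the frozen base model.
model = PeftModel.from_pretrained(base, "fats-fme/4beb9e35-cfc6-4fd4-b960-32d49a7b1fb2")
model.eval()
```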
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 200
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 2.9875 |
| 0.7561 | 0.0249 | 100 | 0.6718 |
| 0.2382 | 0.0497 | 200 | 0.2416 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Tikadilko/Kolodotvor | Tikadilko | 2025-04-27T21:22:25Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"biology",
"chemistry",
"dataset:nvidia/OpenCodeReasoning",
"base_model:deepseek-ai/DeepSeek-V3-0324",
"base_model:adapter:deepseek-ai/DeepSeek-V3-0324",
"license:bsd",
"region:us"
] | null | 2025-04-27T21:18:12Z | ---
license: bsd
datasets:
- nvidia/OpenCodeReasoning
metrics:
- bertscore
base_model:
- deepseek-ai/DeepSeek-V3-0324
new_version: deepseek-ai/DeepSeek-V3-0324
library_name: adapter-transformers
tags:
- biology
- chemistry
--- |
rnzrwnmgbry/rnz | rnzrwnmgbry | 2025-04-27T21:13:05Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-27T21:13:05Z | ---
license: apache-2.0
---
|
attilaultzindur/garbage-classifier | attilaultzindur | 2025-04-27T20:57:47Z | 0 | 0 | null | [
"pytorch",
"ResNet",
"image-classification",
"region:us"
] | image-classification | 2025-04-27T15:52:24Z | ---
tags:
- image-classification
pipeline_tag: image-classification
---
# Garbage Classification Model
A ResNet50 model fine-tuned on the Garbage Classification dataset.
## Model Details
- Base Architecture: ResNet50
- Input Size: 224x224
- Classes: battery, biological, cardboard, clothes, glass, metal, paper, plastic, shoes, trash
## Usage
```python
from transformers import pipeline
from PIL import Image
# Create pipeline
classifier = pipeline("image-classification", model="attilaultzindur/garbage-classifier")
# Load image
image = Image.open("test_image.jpg")
# Make prediction
result = classifier(image)
print(f"Prediction: {result[0]['label']}, Confidence: {result[0]['score']:.2f}")
```
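Since the pipeline output is sorted by score, the top-ranked classes can be inspected directly (continuing the example above; `top_k` is a standard image-classification pipeline argument):
```python
# Show the three most likely classes for the same image.
for pred in classifier(image, top_k=3):
    print(f"{pred['label']}: {pred['score']:.2f}")
```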
|
3mily1u/new-codegen-350m-mono-dpoed-control-50-0.5 | 3mily1u | 2025-04-27T20:57:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"codegen",
"text-generation",
"trl",
"dpo",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T20:56:17Z | ---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mlx-community/Dia-1.6B-fp16 | mlx-community | 2025-04-27T20:54:54Z | 0 | 1 | mlx | [
"mlx",
"safetensors",
"text-to-speech",
"en",
"base_model:nari-labs/Dia-1.6B",
"base_model:finetune:nari-labs/Dia-1.6B",
"license:apache-2.0",
"region:us"
] | text-to-speech | 2025-04-27T20:51:12Z | ---
license: apache-2.0
language:
- en
base_model:
- nari-labs/Dia-1.6B
pipeline_tag: text-to-speech
library_name: mlx
---
# mlx-community/Dia-1.6B-fp16
This model was converted to MLX format from [`nari-labs/Dia-1.6B`](https://huggingface.co/nari-labs/Dia-1.6B) using mlx-audio version **0.1.0**.
Refer to the [original model card](https://huggingface.co/nari-labs/Dia-1.6B) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-audio
```
```bash
python -m mlx_audio.tts.generate --model mlx-community/Dia-1.6B-fp16 \
--text "[S1] Dia is an open weights text to dialogue model. [S2] You get full control over scripts and voices. [S1] Wow. Amazing. (laughs) [S2] Try it now on Git hub or Hugging Face."
```
|
vaibhav1411/gpt2_medium_finetuned_fake_news | vaibhav1411 | 2025-04-27T20:54:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-27T12:01:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
maksf8486/b63d5bb1-3c0b-4edb-9a0c-4766b6d27b42 | maksf8486 | 2025-04-27T20:53:57Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:lmsys/vicuna-7b-v1.5",
"base_model:adapter:lmsys/vicuna-7b-v1.5",
"license:llama2",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-27T20:39:24Z | ---
library_name: peft
license: llama2
base_model: lmsys/vicuna-7b-v1.5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b63d5bb1-3c0b-4edb-9a0c-4766b6d27b42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: lmsys/vicuna-7b-v1.5
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e3bdf1006153b40c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e3bdf1006153b40c_train_data.json
type:
field_instruction: problem
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: false
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: maksf8486/b63d5bb1-3c0b-4edb-9a0c-4766b6d27b42
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/e3bdf1006153b40c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3d1ee2bb-73b6-4b79-854d-d4bb37d1c5c4
wandb_project: s56-2
wandb_run: your_name
wandb_runid: 3d1ee2bb-73b6-4b79-854d-d4bb37d1c5c4
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# b63d5bb1-3c0b-4edb-9a0c-4766b6d27b42
This model is a fine-tuned version of [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9876
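A minimal sketch for loading this adapter and, optionally, merging it into the base model for standalone weights (assuming PEFT 0.13.2 as listed under Framework versions):
```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("lmsys/vicuna-7b-v1.5")
model = PeftModel.from_pretrained(base, "maksf8486/b63d5bb1-3c0b-4edb-9a0c-4766b6d27b42")

# Fold the LoRA deltas into the base weights and drop the PEFT wrapper.
merged = model.merge_and_unload()
merged.save_pretrained("vicuna-7b-b63d5bb1-merged")
```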
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0602 | 0.1464 | 200 | 0.9876 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mlfoundations-dev/d1_science_shortest | mlfoundations-dev | 2025-04-27T20:44:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T14:40:52Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: d1_science_shortest
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# d1_science_shortest
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/d1_science_shortest dataset.
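A minimal generation sketch for this full fine-tune (assuming the standard Qwen2.5 chat template shipped with the tokenizer; the prompt and decoding settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "mlfoundations-dev/d1_science_shortest"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Explain why the sky is blue in two sentences."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

# Increase max_new_tokens if the model produces long reasoning traces.
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```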
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 256
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1
- Datasets 3.0.2
- Tokenizers 0.20.3
|
Alessio-Borgi/all-mpnet-base-v2-margin-based-triplet-loss-finetuned-culture-10-epochs-enhanced | Alessio-Borgi | 2025-04-27T20:38:55Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mpnet",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:6551",
"loss:TripletLoss",
"arxiv:1908.10084",
"arxiv:1703.07737",
"base_model:sentence-transformers/all-mpnet-base-v2",
"base_model:finetune:sentence-transformers/all-mpnet-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-04-27T20:38:31Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:6551
- loss:TripletLoss
base_model: sentence-transformers/all-mpnet-base-v2
widget:
- source_sentence: 'Vechornytsia Slavic traditional social gathering Vechornytsi (Ukrainian:
вечорниці, from вечір "evening") are Ukrainian traditional gatherings with music,
songs, jokes and rituals. Vechornytsi traditionally began in late September, after
the seasonal agricultural work was over. Young people from villages gathered in
the evenings for entertainment. There were everyday and festive vechornytsi. During
everyday parties, people created folk art objects like rushnyky while entertaining
themselves by singing songs or telling jokes. During festive vechornytsi, rich
dinners were cooked, and there was music and dancing. It was the ladies'' responsibility
to cook dinner and the men''s responsibility to provide everybody with music,
drinks, and sweets. Vechornytsi were mainly for younger people. Here they not
only entertained each other during long winter evenings but also met new people,
communicated more closely, and found partners. Each street in a village had at
least one house for vechornytsi. Even small remote farms could have a few houses
for parties because it was a custom that brothers and sisters could not attend
the same vechornytsi. {''aliases'': [''Vechornytsi'']} {''instance of'': ''ritual'',
''subclass of'': ''ritual'', ''described by source'': ''Brockhaus and Efron Encyclopedic
Dictionary'', ''country'': ''India''}'
sentences:
- 'Rudolph Goclenius German philosopher (1547-1628) Rudolph Goclenius the Elder
(Latin: Rudolphus Goclenius; born Rudolf Gockel or Göckel; 1 March 1547 – 8 June
1628) was a German scholastic philosopher. He is sometimes credited with coining
the term psychology in 1590, though the term had been used by Pier Nicola Castellani
and Gerhard Synellius 65 years earlier. {''aliases'': [''Rudolph Goclenius the
Elder'']} {''instance of'': ''human'', ''occupation'': ''philosopher'', ''sex
or gender'': ''male'', ''languages spoken, written or signed'': ''German'', ''country
of citizenship'': ''Germany'', ''described by source'': ''Allgemeine Deutsche
Biographie''}'
- 'Fasole cu cârnaţi Romanian dish Fasole cu cârnați ("beans with sausages", Romanian
pronunciation: [faˈsole ku kɨrˈnat͡sʲ]) is a popular Romanian dish, consisting
of baked beans and sausages. A variation replaces the sausages with afumătură
(smoked meat). Also a traditional Army dish, fasole cu cârnați is prepared by
Army cooks and served freely to the crowds during the National Day celebrations
(on 1 December) in Bucharest and Alba Iulia. The main ingredients for this dish
are: beans, smoked pork, carrots, onions, tomatoes, parsnip, tomato sauce and
bay leaf. {''name'': ''Fasole cu cârnați'', ''caption'': ''Fasole cu cârnați'',
''country'': ''Romania 25px|link=Cuisine of Romania'', ''course'': ''Main course'',
''served'': ''Hot'', ''main_ingredient'': ''Sausages, beans, tomato sauce, tomatoes,
carrots, onions, parsnip, bay leaves''} {''subclass of'': ''food'', ''instance
of'': ''food'', ''has part(s)'': ''flour'', ''maintained by WikiProject'': ''WikiProject
Intangible Cultural Heritage'', ''course'': ''main course''}'
- 'bicycle pedal-driven two-wheel vehicle A bicycle, also called a pedal cycle,
bike, push-bike or cycle, is a human-powered or motor-assisted, pedal-driven,
single-track vehicle, with two wheels attached to a frame, one behind the other.
A bicycle rider is called a cyclist, or bicyclist. Bicycles were introduced in
the 19th century in Europe. By the early 21st century there were more than 1 billion
bicycles. There are many more bicycles than cars. Bicycles are the principal means
of transport in many regions. They also provide a popular form of recreation,
and have been adapted for use as children''s toys. Bicycles are used for fitness,
military and police applications, courier services, bicycle racing, and artistic
cycling. The basic shape and configuration of a typical upright or "safety" bicycle,
has changed little since the first chain-driven model was developed around 1885.
However, many details have been improved, especially since the advent of modern
materials and computer-aided design. These have allowed for a proliferation of
specialized designs for many types of cycling. In the 21st century, electric bicycles
have become popular. The bicycle''s invention has had an enormous effect on society,
both in terms of culture and of advancing modern industrial methods. Several components
that played a key role in the development of the automobile were initially invented
for use in the bicycle, including ball bearings, pneumatic tires, chain-driven
sprockets, and tension-spoked wheels. {''aliases'': [''bike'', ''pushbike'', ''pedal
bike'', ''pedal cycle'', ''cycle'', ''🚲'', ''Bicycles''], ''application'': '':Transportation'',
''caption'': ''The most popular bicycle model—and most popular vehicle of any
kind in the world—is the Chinese Flying Pigeon, with about 500 million produced.'',
''classification'': ''Vehicle'', ''components'': ''Frame, wheels, tires, saddle,
handlebar, pedals, drivetrain'', ''free_label'': ''Types'', ''free_text'': ''Utility
bicycle, mountain bicycle, racing bicycle, touring bicycle, hybrid bicycle, cruiser
bicycle, BMX bike, tandem, low rider, tall bike, fixed gear, folding bicycle,
amphibious cycle, cargo bike, recumbent, electric bicycle'', ''fuel_source'':
''Human-power (and/or motor-power)'', ''image_upright'': ''1.35'', ''invented'':
''19th century'', ''inventor'': ''Karl von Drais, Kirkpatrick MacMillan'', ''name'':
''Bicycle'', ''wheels'': ''2''} {''instance of'': ''mode of transport'', ''on
focus list of Wikimedia project'': ''Wikipedia:List of articles all languages
should have'', ''described by source'': ''Metropolitan Museum of Art Tagging Vocabulary''}'
- source_sentence: 'Slovak Figure Skating Championships recurring figure skating competition
The Slovak Figure Skating Championships (Slovak: Majstrovstvá Slovenska v krasokorčuľovaní)
are an annual figure skating competition organized by he Slovak Figure Skating
Association (Slovak: Slovensky Krasokorčuliarsky Zväz) to crown the national champions
of Slovakia. The senior-level championships are held in coordination with the
skating federations of Hungary, Poland, and Slovakia as part of the Four Nationals
Figure Skating Championships. The first Slovak Championships held after the dissolution
of Czechoslovakia took place in 1993. The Czech Skating Association and the Slovak
Figure Skating Association joined their national championships in 2007. Poland
joined in 2009 and Hungary in 2014. Senior-level skaters from the four nations
compete at the Four Nationals Championships, and then the results are split to
form national podiums for each nation. Medals are awarded in men''s singles, women''s
singles, pair skating, and ice dance at the senior level, and in pair skating
and ice dance at the junior level, although not every discipline is held every
year due to a lack of participants. Junior-level singles skaters and novice-level
skaters in all disciplines compete in a separate competition that is exclusive
to Slovakia. {''name'': ''Slovak Figure Skating Championships'', ''logo'': ''Slovak
Figure Skating Association Logo.jpg'', ''logo_alt'': ''Logo of the Slovak Figure
Skating Association'', ''status'': ''Active'', ''genre'': ''National championships'',
''frequency'': ''Annual'', ''country'': ''{{SVK}}'', ''first'': ''1994'', ''prev'':
''2025 Four Nationals Championships'', ''organised'': ''Slovak Figure Skating
Association''} {''instance of'': ''recurring sporting event'', ''subclass of'':
''recurring sporting event'', ''event interval'': ''{"amount": "+1", "unit": "http://www.wikidata.org/entity/Q577"}'',
''country'': ''United States'', ''sport'': ''badminton'', ''on focus list of Wikimedia
project'': ''WikiProject Badminton/tournament focus list''}'
sentences:
- 'arm folding method of crossing arms The manner in which a person folds their
arms is a dynamic morphological demonstration of two alternative phenotypes. Once
adopted, manner of arms folding across the chest does not change throughout the
lifetime and persons easily give up the unusual folding position, most commonly
at the first attempt. If the right arm is folded above the left forearm, the phenotype
is characterised as R (right; the right type), and in the opposite case, i.e.
if the left arm is positioned above the right, it is the phenotype L (left; left-type).
It has been shown that the phenotypes of these properties are distributed independently
with left-handed and right-handed people. {''aliases'': [''crossed arms'']} {''subclass
of'': ''gesture'', ''instance of'': ''gesture'', ''uses'': ''hand'', ''described
by source'': ''Brockhaus and Efron Encyclopedic Dictionary''}'
- 'Chilean takeover of the Strait of Magellan Chile''s takeover of the Strait in
1843 The Chilean colonization of the Strait of Magellan began in 1843 when an
expedition founded Fuerte Bulnes. In 1848 the settlement of Punta Arenas was established
further north in the strait and grew eventually to become the main settlement
in the strait, a position it holds to this day. The Chilean settlement of the
strait was crucial to establish its sovereignty claims in the area. Argentina
complained diplomatically this act in 1847, as part of the East Patagonia, Tierra
del Fuego and Strait of Magellan Dispute, and once the dispute was settled, formally
recognised Chilean sovereignty of the strait in 1881. The Magallanes territory
was made a regular Chilean province in 1928. {''aliases'': [''Takeover of the
Strait of Magellan'']} {''instance of'': ''historical event'', ''country'': ''Weimar
Republic'', ''on focus list of Wikimedia project'': ''Wikipedia:Vital articles/Level/4'',
''location'': ''Berlin'', ''part of'': ''German Revolution of 1918–1919''}'
- 'Sefer HaRazim magical book given by the Angel Raziel to Noah Sefer HaRazim (Hebrew:
ספר הרזים; "Book of Secrets") is a Jewish magical text supposedly given to Noah
by the angel Raziel, and passed down throughout Biblical history until it ended
up in the possession of Solomon, for whom it was a great source of his wisdom
and purported magical powers. This is not the same work as the Sefer Raziel HaMalakh,
which was given to Adam by the same angel, although both works stem from the same
tradition, and large parts of Sefer HaRazim were incorporated into the Sefer Raziel
under its original title. It is thought to be a sourcebook for Jewish magic, calling
upon angels rather than God to perform supernatural feats. {''aliases'': [''Sepher
Ha-Razim'', ''Book of the Mysteries'']} {''instance of'': ''book'', ''language
of work or name'': ''English'', ''subclass of'': ''book'', ''country of origin'':
''United Kingdom'', ''publisher'': ''White Wolf Publishing'', ''copyright status'':
''copyrighted'', ''author'': ''Derek Lambert'', ''described by source'': ''Meyers
Konversations-Lexikon, 4th edition (1885–1890)''}'
- source_sentence: 'Gerardus Mercator Flemish geographer, cosmographer and cartographer
(1512–1594) Gerardus Mercator (; 5 March 1512 – 2 December 1594) was a Flemish
geographer, cosmographer and cartographer. He is most renowned for creating the
1569 world map based on a new projection which represented sailing courses of
constant bearing (rhumb lines) as straight lines—an innovation that is still employed
in nautical charts. Mercator was a notable maker of globes and scientific instruments.
In addition, he had interests in theology, philosophy, history, mathematics, and
geomagnetism. He was also an accomplished engraver and calligrapher. Unlike other
great scholars of the age, he travelled little and his knowledge of geography
came from his library of over a thousand books and maps, from his visitors and
from his vast correspondence (in six languages) with other scholars, statesmen,
travellers, merchants and seamen. Mercator''s early maps were in large formats
suitable for wall mounting but in the second half of his life, he produced over
100 new regional maps in a smaller format suitable for binding into his Atlas
of 1595. This was the first appearance of the word Atlas in reference to a book
of maps. However, Mercator used it as a neologism for a treatise (Cosmologia)
on the creation, history and description of the universe, not simply a collection
of maps. He chose the word as a commemoration of the Titan Atlas, "King of Mauretania",
whom he considered to be the first great geographer. A large part of Mercator''s
income came from sales of terrestrial and celestial globes. For sixty years they
were considered the finest in the world, and were sold in such numbers that there
are many surviving examples. This was a substantial enterprise involving the manufacture
of the spheres, printing the gores, building substantial stands, packing and distributing
them all over Europe. He was also renowned for his scientific instruments, particularly
his astrolabes and astronomical rings used to study the geometry of astronomy
and astrology. Mercator wrote on geography, philosophy, chronology and theology.
All of the wall maps were engraved with copious text on the region concerned.
As an example, the famous world map of 1569 is inscribed with over five thousand
words in fifteen legends. The 1595 Atlas has about 120 pages of maps and illustrated
title pages, but a greater number of pages are devoted to his account of the creation
of the universe and descriptions of all the countries portrayed. His table of
chronology ran to some 400 pages fixing the dates (from the time of creation)
of earthly dynasties, major political and military events, volcanic eruptions,
earthquakes and eclipses. He also wrote on the gospels and the Old Testament.
Mercator was a devout Christian born into a Catholic family at a time when Martin
Luther''s Protestantism was gaining ground. He never declared himself as a Lutheran
but was clearly sympathetic, and he was accused of heresy by Catholic authorities;
after six months in prison he was released unscathed. This period of persecution
is probably the major factor in his move from Catholic Leuven (Louvain) to a more
tolerant Duisburg, in the Holy Roman Empire, where he lived for the last thirty
years of his life. Walter Ghim, Mercator''s friend and first biographer, describes
him as sober in his behaviour, yet cheerful and witty in company, and never more
happy than in debate with other scholars. {''name'': ''Gerardus Mercator'', ''caption'':
''Portrait by Hogenberg,1574. (Translation)'', ''alt'': ''Portrait of Gerard
Mercator'', ''birth_name'': ''Geert De Kremer'', ''birth_date'': ''5 March 1512'',
''birth_place'': ''Rupelmonde, County of Flanders'', ''death_date'': ''{{Death
date and age|df|=|yes|1594|12|2|1512|3|5}}'', ''death_place'': ''Duisburg, United
Duchies of Jülich-Cleves-Berg, {{avoid wrap|Holy Roman Empire}}'', ''education'':
''University of Leuven'', ''known_for'': ''{{Plainlist|\n* World map based on
the Mercator projection (1569)\n* Coining the term Atlas}}'', ''spouse'': ''{{plainlist|\n*
|marriage|Barbara Schellekens|1534|1586|end|=|d|\n* |marriage|Gertrude Vierlings|1589|}}
{{marriage|Barbara Schellekens|1534|1586|end|=|d}} * {{marriage|Gertrude Vierlings|1589}}'',
''children'': ''6, including Arnold and Rumold'', ''signature'': ''Signature of
Gerardus Mercator (1512–1594).png'', ''aliases'': [''Gerhard Mercator'', ''Gerhard
Kremer'', ''Mercat.e'', ''Mercatore'', ''Gerard Mercator'', ''Mercator'', ''Gherardo
Mercatore'', ''Gerard Merkator'', ''Gérard de Cremer'', ''Gerardus Cremers'']}
{''occupation'': ''writer'', ''instance of'': ''human'', ''sex or gender'': ''male'',
''position held'': ''United States senator'', ''described by source'': ''Obálky
knih'', ''copyright status as a creator'': ''copyrights on works have expired''}'
sentences:
- 'Naoko Takeuchi Japanese manga artist Naoko Takeuchi (Japanese: 武内 直子, Hepburn:
Takeuchi Naoko, born March 15, 1967) is a Japanese manga artist. She is best known
as the author of Sailor Moon, one of the most popular manga series of all time.
She has won several awards, including the 1993 Kodansha Manga Award for Sailor
Moon. Takeuchi is married to Yoshihiro Togashi, the author of YuYu Hakusho and
Hunter × Hunter. {''alias'': ''Sumire Shirobara'', ''aliases'': [''Takeuchi Naoko''],
''awards'': ''Kodansha Manga Award (1993)'', ''birth_date'': ''{{Birth date and
age|1967|3|15}}'', ''birth_place'': ''Kōfu, Yamanashi, Japan'', ''caption'': ''Takeuchi
at the 1998 San Diego Comic-Con'', ''children'': ''2'', ''native_name'': ''武内
直子'', ''notable works'': "{{unbulleted list|''''Sailor Moon''''|''''Codename:
Sailor V''''}}", ''occupation'': ''Manga artist'', ''spouse'': ''{{marriage|Yoshihiro
Togashi|1999}}'', ''years_active'': ''1986–present''} {''award received'': ''Inkpot
Award'', ''copyright status as a creator'': ''works protected by copyrights'',
''instance of'': ''human'', ''occupation'': ''comics artist'', ''notable work'':
''Sailor Moon''}'
- 'Prikaz military government agencies in Tsardom of Russia, 16th-17th centuries
A prikaz (Russian: прика́з; IPA: [prʲɪˈkas] , plural: prikazy) was an administrative,
judicial, territorial, or executive office functioning on behalf of palace, civil,
military, or church authorities in the Grand Duchy of Moscow and the Tsardom of
Russia from the 15th to the 18th centuries. The term usually suggests the functionality
of a modern "ministry", "office", "department", or "bureau"; however, in practice
prikaz was historically applied to most governmental organizations regardless
of their function or authority. In modern Russian, prikaz literally means an ''order''
in the meaning of ''directive'' or ''command''. Most of the prikazy were subordinated
to the boyar duma. Some of them, palace prikazy (Russian: дворцовые приказы, romanized:
dvortsovyje prikazy), were subordinated to the taynyi prikaz or pervyi prikaz,
which answered directly to the tsar of Russia. The patriarch of Moscow and all
Rus'' had his own prikazy. {''note'': ''infobox not present in Wikipedia''} {''instance
of'': ''government agency'', ''subclass of'': ''government agency'', ''country'':
''United States'', ''dissolved, abolished or demolished date'': ''{"time": "+1945-00-00T00:00:00Z",
"timezone": 0, "before": 0, "after": 0, "precision": 9, "calendarmodel": "http://www.wikidata.org/entity/Q1985727"}'',
''headquarters location'': ''Washington, D.C.''}'
- 'Chicago Public Media not-for-profit media company Chicago Public Media (CPM)
is a not-for-profit radio and print media company. CPM operates as the primary
National Public Radio member organization for Chicago. It owns three non-commercial
educational FM broadcast stations and one FM translator. In addition to local
news and information productions, it produces the programs Wait Wait... Don''t
Tell Me! for NPR stations, and This American Life which is distributed by PRX
to other radio stations. On January 30, 2022, Chicago Public Media acquired the
Chicago Sun-Times daily newspaper. {''type'': ''non-profit'', ''leader_title'':
''CEO'', ''leader_name'': ''Melissa Bell'', ''subsidiaries'': "WBEZ <br> WBEW
<br> WRTE <br> ''''Chicago Sun-Times'''' <br> ''''This American Life'''' <br>
''''Wait Wait... Don''t Tell Me!''''", ''formerly'': ''The WBEZ Alliance'', ''website'':
''{{Official URL}}'', ''abbreviation'': ''CPM'', ''tax_id'': ''36-3687394'', ''aliases'':
[''Chicago Public Radio'']} {''instance of'': ''media company'', ''country'':
''United States'', ''industry'': ''mass media'', ''grants'': "bachelor''s degree",
''language of work or name'': ''English''}'
- source_sentence: 'Beau Blackstone 1973 novel Beau Blackstone is a 1973 historical
thriller novel by the British writer Derek Lambert, published under the pen name
Richard Falkirk. It is the third in a series of six novels featuring Edmund Blackstone,
a member of the Bow Street Runners in the pre-Victorian era. Blackstone goes undercover
amongst a gang of navvies working on a new railway, and is called on for plans
to thwart the first Great Train Robbery. {''name'': ''Beau Blackstone'', ''caption'':
''First edition'', ''author'': ''Derek Lambert'', ''country'': ''United Kingdom'',
''language'': ''English'', ''series'': ''Edmund Blackstone'', ''genre'': ''Historical
thriller'', ''publisher'': ''Stein and Day'', ''release_date'': ''1973'', ''media_type'':
''Print'', ''preceded_by'': "Blackstone''s Fancy", ''followed_by'': ''Blackstone
and the Scourge of Europe''} {''instance of'': ''book'', ''language of work or
name'': ''English'', ''subclass of'': ''book'', ''country of origin'': ''United
Kingdom'', ''publisher'': ''White Wolf Publishing'', ''copyright status'': ''copyrighted'',
''author'': ''Derek Lambert'', ''described by source'': ''Meyers Konversations-Lexikon,
4th edition (1885–1890)''}'
sentences:
- 'Bishop of Buddhist Churches of America The bishop is the highest spiritual leader
in the Jodo Shinshu organization Buddhist Churches of America The bishop of the
Buddhist Churches of America is the highest spiritual leader in the Buddhist Churches
of America (BCA). Since BCA is part of Honganji-ha, the bishop is subordinate
to the Monshu of Honganji-ha. Between 1899 and 1918 the leader of Buddhist Mission
of North America (BMNA) had the title kantoku (superintendent). 1918 the title
was changed to sochō (bishop). BMNA changed its name to Buddhist Churches of America
in 1944. {''post'': ''Bishop of the Buddhist Churches of America'', ''native_name'':
''Sochō'', ''incumbent'': ''Marvin Harada'', ''incumbentsince'': ''1 April 2020'',
''style'': ''Reverend'', ''member_of'': ''Buddhist Churches of America'', ''seat'':
''San Francisco'', ''formation'': ''1918'', ''first'': ''Kōyū Uchida'', ''website'':
''https://www.buddhistchurchesofamerica.org/''} {''occupation'': ''religious leader'',
''instance of'': ''human'', ''sex or gender'': ''male'', ''subclass of'': ''religious
leader'', ''country of citizenship'': ''United States'', ''languages spoken, written
or signed'': ''English'', ''canonization status'': ''saint'', ''described by source'':
''Brockhaus and Efron Encyclopedic Dictionary''}'
- 'Cold Sweat 1970 film directed by Terence Young Cold Sweat is a 1970 French-Italian
international co-production starring Charles Bronson and directed by Terence Young.
It is based on the 1959 novel Ride the Nightmare by Richard Matheson. It was filmed
in and around Beaulieu-sur-Mer. {''name'': ''Cold Sweat'', ''caption'': ''Theatrical
release poster'', ''director'': ''Terence Young'', ''screenplay'': ''Shimon Wincelberg<br
/>Jo Eisinger<br />Dorothea Bennett'', ''based_on'': "{{based on|''''Ride the
Nightmare''''|Richard Matheson}}", ''starring'': ''Charles Bronson<br />Liv Ullmann<br
/>James Mason<br />Jill Ireland'', ''producer'': ''Robert Dorfmann<br />Maurice
Jacquin'', ''music'': ''Michel Magne'', ''cinematography'': ''Jean Rabier'', ''distributor'':
''Emerson Film Enterprises'', ''released'': ''{{Film date|1970|06|14|df|=|y}}'',
''runtime'': ''94 minutes'', ''country'': ''France<br>Italy'', ''language'': ''English''}
{''instance of'': ''film'', ''color'': ''color'', ''original language of film
or TV show'': ''English'', ''genre'': ''drama film'', ''distribution format'':
''video on demand'', ''country of origin'': ''United States''}'
- 'plank flat rectangular piece of timber A plank is timber that is flat, elongated,
and rectangular with parallel faces that are higher and longer than wide. Used
primarily in carpentry, planks are critical in the construction of ships, houses,
bridges, and many other structures. Planks also serve as supports to form shelves
and tables. Usually made from timber, sawed so that the grain runs along the length,
planks are usually more than 1+1⁄2 in (38 mm) thick, and are generally wider than
2+1⁄2 in (64 mm). In the United States, planks can be any length and are generally
a minimum of 2×8 (1+1⁄2 in × 7+1⁄4 in or 38 mm × 184 mm), but planks that are
2×10 (1+1⁄2 in × 9+1⁄4 in or 38 mm × 235 mm) and 2×12 (1+1⁄2 in × 11+1⁄4 in or
38 mm × 286 mm) are more commonly stocked by lumber retailers. Planks are often
used as a work surface on elevated scaffolding, and need to be thick enough to
provide strength without breaking when walked on. The wood is categorized as a
board if its width is less than 2+1⁄2 in (64 mm), and its thickness is less than
1+1⁄2 in (38 mm). A plank used in a building as a horizontal supporting member
that runs between foundations, walls, or beams to support a ceiling or floor is
called a joist. The plank was the basis of maritime transport: wood (except some
dense hardwoods) floats on water, and abundant forests meant wooden logs could
be easily obtained and processed, making planks the primary material in ship building.
However, since the 20th century, wood has largely been supplanted in ship construction
by iron and steel, to decrease cost and improve durability. {''note'': ''infobox
not present in Wikipedia''} {''subclass of'': ''building material'', ''instance
of'': ''building material'', ''described by source'': ''Encyclopædia Britannica
11th edition'', ''on focus list of Wikimedia project'': ''Wikipedia:Vital articles/Level/4'',
''made from material'': ''concrete''}'
- source_sentence: 'Court of Appeal Icelandic appellate court The Court of Appeal
(Icelandic: Landsréttur, lit. National Court) is an appellate court in Iceland
with appellate jurisdiction over all district court cases. The court was established
by the Courts Act of 2016 and began operating 1 January 2018. The establishment
introduced a three-tier judiciary in Iceland where before operated only district
courts and the Supreme Court since the 1919 abolition of the National High Court.
The court is composed of fifteen justices selected by the Qualifications Committee
and nominated by the Minister of Justice for presidential confirmation. In cases
where the minister wishes to make changes to the committee''s selection, Parliament
must approve of said changes with a simple majority vote. {''court_name'': ''Court
of Appeal'', ''native_name'': ''Landsréttur'', ''established'': ''7 June 2016'',
''jurisdiction'': ''Iceland'', ''location'': ''Reykjavík'', ''type'': ''Presidential
appointment after Minister of Justice nomination following Qualifications Committee
selection. Parliamentary confirmation before appointment if minister nomination
differs from committee selection.'', ''authority'': ''Courts Act No. 50/2016'',
''appealsto'': ''Supreme Court'', ''appealsfrom'': ''District courts'', ''terms'':
''Life tenure'', ''positions'': ''15 (by statute)'', ''budget'': ''703.8 million
ISK (2019)'', ''website'': ''{{URL|landsrettur.is}} {{In lang|is}}'', ''chiefjudgetitle'':
''President'', ''chiefjudgename'': ''Hervör Þorvaldsdóttir'', ''chiefjudgetitle2'':
''Vice-President'', ''chiefjudgename2'': ''Eiríkur Jónsson'', ''aliases'': [''Landsréttur'']}
{''instance of'': ''government'', ''subclass of'': ''government'', ''country'':
''France'', ''applies to jurisdiction'': ''Israel''}'
sentences:
- 'inker line artist in a traditional comic book or graphic novel The inker (sometimes
credited as the finisher or embellisher) is one of the two line artists in traditional
comic book production. After the penciller creates a drawing, the inker interprets
this drawing by outlining and embellishing it with a pencil, a pen or a brush.
Inking was necessary in the traditional printing process as presses could not
reproduce pencilled drawings. Another specialist, the letterer, handles the "inking"
of text, while the colorist applies color to the final art submitted by the inker.
{''aliases'': [''finisher'', ''embellisher'', ''comic inker'', ''Inking (drawing
technique)'']} {''instance of'': ''profession'', ''subclass of'': ''comics artist''}'
- 'Zhuazhou Chinese ritual held on a child''s first birthday Zhuazhou (抓週 – literally,
"pick" and "anniversary", meaning "one-year-old catch" ) is a Chinese ritual held
at a child''s first birthday party, when the child is 1 year, i.e. typically twelve
months since birth (although variable reckonings as to what constitutes a year
of age for entitlement for zhuazhou exist), old. The parents put various objects
before the child. Parents will often put objects that symbolize career choices
or personality traits. The child''s choice is used to forecast its future. It
is said that this custom can be dated back to the Northern and Southern dynasties
(420-589). Yan Zhitui in his book Yanshi jiaxun 顏氏家訓 ("The Family Instructions
of Master Yan") documented a custom that is very similar to Zhuazhou today. The
earliest written record of this custom can be traced back to the Song dynasty
(960-1279). It is portrayed in a well-known scene in the novel Dream of the Red
Chamber. {''t'': ''{{linktext|抓週}}'', ''s'': ''{{linktext|抓周}}'', ''p'': ''zhuāzhōu'',
''w'': ''chua-cho'', ''qn'': ''thôi nôi''} {''instance of'': ''ritual'', ''subclass
of'': ''ritual'', ''described by source'': ''Brockhaus and Efron Encyclopedic
Dictionary'', ''country'': ''India''}'
- 'Allgemeine Zeitung des Judentums magazine Allgemeine Zeitung des Judentums (until
May 1903: Allgemeine Zeitung des Judenthums) was a Jewish German magazine devoted
to Jewish interests, founded in 1837 by Ludwig Philippson (1811–89), published
first in Leipzig and later in Berlin. In 1860 it had a circulation of approximately
1,500. It was read not only in Germany, Austria, and the Netherlands but also
in Eastern Europe, and continued to appear until 1922. At the time of its founding,
several Jewish journals had recently been launched in Germany – Sulamith (1806-1843),
Jedidja (1817-1831), and Abraham Geiger''s Wissenschaftliche Zeitschrift für Jüdische
Theologie (1835-1847), as well as the Unparteiische Universal-Kirchenzeitung (1837),
of Julius Vinzenz Höninghaus, which had a Jewish section edited by Michael Hess
and Isaac Markus Jost – and Philippson recognized that none had kept pace with
the needs of the times. He aimed to produce a journal for the intelligent lay
person that would both advance knowledge of Jewish history and plead the cause
of the Jews of his day. The first number of the paper appeared May 2, 1837, and
was published by Baumgärtner in Leipzig with the subtitle "Unparteiisches Organ
für Alles Jüdische Interesse in Betreff von Politik, Religion, Literatur, Geschichte,
Sprachkunde, und Belletristik" (Impartial Organ for All Matters of Jewish Interest
Pertaining to Politics, Religion, Literature, History, Philology, and Belles-lettres).
During the first two years the paper appeared three times per week. For a year
and a half a supplement was published three times a month, devoted to literature
and homiletics. In the course of 1839 it was first published twice weekly and
then eventually became a weekly. Isidore Singer, writing in 1906, highlighted
the paper''s editorial independence, noting that it had not ever received a subsidy
from any Jewish body, and that during the revolutions of 1848, "when the publication
of nearly all other Jewish journals was interrupted, the Allgemeine Zeitung braved
the storm and spoke out plainly in the political turmoil." According to I. M.
Jost, who devoted a chapter to the journal in his Neuere Geschichte der Israeliten
(1847), the Allgemeine Zeitung "became epoch-making in Jewish history by attempting
for the first time to give a general view of the life and conditions of the Jews."
Philippson''s chief aim was the civil emancipation of the Jews, carrying on the
fight for that cause in the spirit of Gabriel Riesser''s earlier periodical Der
Jude (1832-1835). The paper was a voice for moderate religious reform, focusing
attention on the organization of religious instruction, the form of worship in
the synagogue, and the cultivation of all branches of Jewish learning. It also
advocated for closer relations with non-Jews. It exercised considerable influence
on Judaism in general, and, in particular, on the evolution of Judaism in Germany.
It played a role in the establishment of a rabbinical seminary (Lehranstalt für
die Wissenschaft des Judenthums) in Berlin, and of a Jewish Publication Society
(Institut zur Förderung der Israelitischen Literatur), as well as the calling
together of a Jewish synod (Leipzig, 1869). From the outset the Allgemeine Zeitung
met with success, drawing the interest of cultured Jewish circles of Germany,
Austria, and the Netherlands. Within the first months of its publication a society
of students in Leyden (Netherlands) had formed to aid its circulation, and it
even obtained several hundred subscribers in Poland. During the first years of
its existence the paper had among its collaborators a number of the most distinguished
scholars, including Gabriel Riesser, E. Carmoly, J. L. Saalschütz, S. D. Luzzatto,
Leopold Zunz, Leopold Dukes, Julius Fürst, Leopold Löw, Franz Delitzsch, Adolph
Jellinek, Abraham Geiger, and I. M. Jost. During the first year Phoebus Philippson,
brother of Ludwig, contributed a series of 11 articles under the title "Ideas
for an Encyclopedia and a Methodology of Jewish Theology." In the mid-1850s a
supplement was published regularly, entitled Jüdisches Volksblatt zur Belehrung
und Unterhaltung auf Jüdischem Gebiete (A Popular Jewish Journal for Instruction
and Entertainment on Jewish Subjects). After Philippson''s death Gustav Karpeles
assumed the editorship, beginning with the issue of February 9, 1890. Under his
tenure the paper''s interests shifted toward the lives and situation of the Jews
of Eastern Europe. At that time a change was made in the format so that the literary
part, which formed the bulk of the paper, was separated from the part containing
the news. The latter was paged separately as a supplement entitled Der Gemeindebote,
which continued to appear until 1922. In 1890 the journal was acquired by Rudolf
Mosse, and from then on published in Berlin. Later, beginning in the second half
of 1920, the journal appeared only once every two weeks. It ceased publication
with the issue of April 28, 1922, and was succeeded by the C.V.-Zeitung (C.V.-Newspaper),
the organ of the Centralverein deutscher Staatsbürger jüdischen Glaubens (Central
Association of German Citizens of Jewish Faith). {''note'': ''infobox not present
in Wikipedia''} {''instance of'': ''magazine'', ''language of work or name'':
''German'', ''country of origin'': ''Germany'', ''copyright status'': ''public
domain'', ''described by source'': ''Brockhaus and Efron Encyclopedic Dictionary'',
''country'': ''Germany''}'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on sentence-transformers/all-mpnet-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) <!-- at revision 12e86a3c702fc3c50205a8db88f0ec7c0b6b94a0 -->
- **Maximum Sequence Length:** 384 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Alessio-Borgi/all-mpnet-base-v2-margin-based-triplet-loss-finetuned-culture-10-epochs-enhanced")
# Run inference
sentences = [
"Court of Appeal Icelandic appellate court The Court of Appeal (Icelandic: Landsréttur, lit. National Court) is an appellate court in Iceland with appellate jurisdiction over all district court cases. The court was established by the Courts Act of 2016 and began operating 1 January 2018. The establishment introduced a three-tier judiciary in Iceland where before operated only district courts and the Supreme Court since the 1919 abolition of the National High Court. The court is composed of fifteen justices selected by the Qualifications Committee and nominated by the Minister of Justice for presidential confirmation. In cases where the minister wishes to make changes to the committee's selection, Parliament must approve of said changes with a simple majority vote. {'court_name': 'Court of Appeal', 'native_name': 'Landsréttur', 'established': '7 June 2016', 'jurisdiction': 'Iceland', 'location': 'Reykjavík', 'type': 'Presidential appointment after Minister of Justice nomination following Qualifications Committee selection. Parliamentary confirmation before appointment if minister nomination differs from committee selection.', 'authority': 'Courts Act No. 50/2016', 'appealsto': 'Supreme Court', 'appealsfrom': 'District courts', 'terms': 'Life tenure', 'positions': '15 (by statute)', 'budget': '703.8 million ISK (2019)', 'website': '{{URL|landsrettur.is}} {{In lang|is}}', 'chiefjudgetitle': 'President', 'chiefjudgename': 'Hervör Þorvaldsdóttir', 'chiefjudgetitle2': 'Vice-President', 'chiefjudgename2': 'Eiríkur Jónsson', 'aliases': ['Landsréttur']} {'instance of': 'government', 'subclass of': 'government', 'country': 'France', 'applies to jurisdiction': 'Israel'}",
'Allgemeine Zeitung des Judentums magazine Allgemeine Zeitung des Judentums (until May 1903: Allgemeine Zeitung des Judenthums) was a Jewish German magazine devoted to Jewish interests, founded in 1837 by Ludwig Philippson (1811–89), published first in Leipzig and later in Berlin. In 1860 it had a circulation of approximately 1,500. It was read not only in Germany, Austria, and the Netherlands but also in Eastern Europe, and continued to appear until 1922. At the time of its founding, several Jewish journals had recently been launched in Germany – Sulamith (1806-1843), Jedidja (1817-1831), and Abraham Geiger\'s Wissenschaftliche Zeitschrift für Jüdische Theologie (1835-1847), as well as the Unparteiische Universal-Kirchenzeitung (1837), of Julius Vinzenz Höninghaus, which had a Jewish section edited by Michael Hess and Isaac Markus Jost – and Philippson recognized that none had kept pace with the needs of the times. He aimed to produce a journal for the intelligent lay person that would both advance knowledge of Jewish history and plead the cause of the Jews of his day. The first number of the paper appeared May 2, 1837, and was published by Baumgärtner in Leipzig with the subtitle "Unparteiisches Organ für Alles Jüdische Interesse in Betreff von Politik, Religion, Literatur, Geschichte, Sprachkunde, und Belletristik" (Impartial Organ for All Matters of Jewish Interest Pertaining to Politics, Religion, Literature, History, Philology, and Belles-lettres). During the first two years the paper appeared three times per week. For a year and a half a supplement was published three times a month, devoted to literature and homiletics. In the course of 1839 it was first published twice weekly and then eventually became a weekly. Isidore Singer, writing in 1906, highlighted the paper\'s editorial independence, noting that it had not ever received a subsidy from any Jewish body, and that during the revolutions of 1848, "when the publication of nearly all other Jewish journals was interrupted, the Allgemeine Zeitung braved the storm and spoke out plainly in the political turmoil." According to I. M. Jost, who devoted a chapter to the journal in his Neuere Geschichte der Israeliten (1847), the Allgemeine Zeitung "became epoch-making in Jewish history by attempting for the first time to give a general view of the life and conditions of the Jews." Philippson\'s chief aim was the civil emancipation of the Jews, carrying on the fight for that cause in the spirit of Gabriel Riesser\'s earlier periodical Der Jude (1832-1835). The paper was a voice for moderate religious reform, focusing attention on the organization of religious instruction, the form of worship in the synagogue, and the cultivation of all branches of Jewish learning. It also advocated for closer relations with non-Jews. It exercised considerable influence on Judaism in general, and, in particular, on the evolution of Judaism in Germany. It played a role in the establishment of a rabbinical seminary (Lehranstalt für die Wissenschaft des Judenthums) in Berlin, and of a Jewish Publication Society (Institut zur Förderung der Israelitischen Literatur), as well as the calling together of a Jewish synod (Leipzig, 1869). From the outset the Allgemeine Zeitung met with success, drawing the interest of cultured Jewish circles of Germany, Austria, and the Netherlands. Within the first months of its publication a society of students in Leyden (Netherlands) had formed to aid its circulation, and it even obtained several hundred subscribers in Poland. 
During the first years of its existence the paper had among its collaborators a number of the most distinguished scholars, including Gabriel Riesser, E. Carmoly, J. L. Saalschütz, S. D. Luzzatto, Leopold Zunz, Leopold Dukes, Julius Fürst, Leopold Löw, Franz Delitzsch, Adolph Jellinek, Abraham Geiger, and I. M. Jost. During the first year Phoebus Philippson, brother of Ludwig, contributed a series of 11 articles under the title "Ideas for an Encyclopedia and a Methodology of Jewish Theology." In the mid-1850s a supplement was published regularly, entitled Jüdisches Volksblatt zur Belehrung und Unterhaltung auf Jüdischem Gebiete (A Popular Jewish Journal for Instruction and Entertainment on Jewish Subjects). After Philippson\'s death Gustav Karpeles assumed the editorship, beginning with the issue of February 9, 1890. Under his tenure the paper\'s interests shifted toward the lives and situation of the Jews of Eastern Europe. At that time a change was made in the format so that the literary part, which formed the bulk of the paper, was separated from the part containing the news. The latter was paged separately as a supplement entitled Der Gemeindebote, which continued to appear until 1922. In 1890 the journal was acquired by Rudolf Mosse, and from then on published in Berlin. Later, beginning in the second half of 1920, the journal appeared only once every two weeks. It ceased publication with the issue of April 28, 1922, and was succeeded by the C.V.-Zeitung (C.V.-Newspaper), the organ of the Centralverein deutscher Staatsbürger jüdischen Glaubens (Central Association of German Citizens of Jewish Faith). {\'note\': \'infobox not present in Wikipedia\'} {\'instance of\': \'magazine\', \'language of work or name\': \'German\', \'country of origin\': \'Germany\', \'copyright status\': \'public domain\', \'described by source\': \'Brockhaus and Efron Encyclopedic Dictionary\', \'country\': \'Germany\'}',
'inker line artist in a traditional comic book or graphic novel The inker (sometimes credited as the finisher or embellisher) is one of the two line artists in traditional comic book production. After the penciller creates a drawing, the inker interprets this drawing by outlining and embellishing it with a pencil, a pen or a brush. Inking was necessary in the traditional printing process as presses could not reproduce pencilled drawings. Another specialist, the letterer, handles the "inking" of text, while the colorist applies color to the final art submitted by the inker. {\'aliases\': [\'finisher\', \'embellisher\', \'comic inker\', \'Inking (drawing technique)\']} {\'instance of\': \'profession\', \'subclass of\': \'comics artist\'}',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
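For semantic search over a small corpus, the library's `util.semantic_search` helper can be combined with this model. A minimal sketch; the corpus and query below are hypothetical placeholders, not the training data:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Alessio-Borgi/all-mpnet-base-v2-margin-based-triplet-loss-finetuned-culture-10-epochs-enhanced")

# Hypothetical corpus and query for illustration
corpus = [
    "inker line artist in a traditional comic book or graphic novel",
    "Zhuazhou Chinese ritual held on a child's first birthday",
]
query = "Who embellishes the penciller's drawings?"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Returns the top-k corpus entries ranked by cosine similarity
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)
print(hits[0])  # e.g. [{'corpus_id': 0, 'score': ...}, ...]
```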
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 6,551 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>sentence_2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | sentence_2 |
|:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 72 tokens</li><li>mean: 303.52 tokens</li><li>max: 384 tokens</li></ul> | <ul><li>min: 58 tokens</li><li>mean: 296.5 tokens</li><li>max: 384 tokens</li></ul> | <ul><li>min: 58 tokens</li><li>mean: 295.81 tokens</li><li>max: 384 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 | sentence_2 |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Cabinet of the French Consulate Wikimedia list article The Cabinet of the French Consulate was formed following the Coup of 18 Brumaire which replaced the Directory with the Consulate. The new regime was ratified by the adoption of the Constitution of the Year VIII on 24 December 1799 and headed by Napoleon Bonaparte as First Consul, with Jean Jacques Régis de Cambacérès and Charles-François Lebrun serving as Second and Third Consuls respectively. {'cabinet_name': 'Cabinet of the Consulate', 'jurisdiction': 'France', 'flag': '{{flag|France}}', 'flag_border': 'true', 'caption': 'The Three Consuls.', 'date_formed': '11 November 1799', 'date_dissolved': '18 May 1804', 'government_head': 'Napoleon Bonaparte<br>Jean Jacques Régis de Cambacérès<br>Charles-François Lebrun', 'previous': 'Government of the Directory', 'successor': 'First Cabinet of Napoleon I'} {'instance of': 'government', 'subclass of': 'government', 'country': 'France', 'applies to jurisdiction': 'Israel'}</code> | <code>Julia Carabias Mexican professor Julia Carabias Lillo (born August 11, 1954, Mexico City) is a Mexican ecologist and Environmental Conservationist. She is a professor at the National Autonomous University of Mexico and served as the Secretariat of Environment and Natural Resources under President Ernesto Zedillo from 1994 to 2000. {'name': 'Julia Carabias Lillo', 'birth_date': 'August 11, 1954', 'birth_place': 'Mexico City, Mexico', 'fields': 'Ecology and Environmental Conservation', 'workplaces': 'National Autonomous University of Mexico\nSecretariat of Environment and Natural Resources', 'alma_mater': 'National Autonomous University of Mexico, BS (1977) and MS (1981)', 'spouse': 'José Woldenberg (divorced)', 'awards': 'J. Paul Getty Award for Conservation Leadership\nInternational Cosmos Prize\nChampions of the Earth\nBelisario Domínguez Medal of Honor', 'known_for': 'Environmental conservation, former Secretariat of Environment and Natural Resources', 'aliases': ['Julia Carabias Lil...</code> | <code>youth sports sport practiced by youth Youth sports is any sports event where competitors are younger than adult age, whether children or adolescents. Youth sports includes school sports at primary and secondary level, as well as sports played outside the education system, whether informally or organized. In sports studies and public policy contexts, an age limit of 18 (the age of majority) is usual in discussing "youth sport". Not all sports governing bodies define "youth" as "under-18": while the Youth Olympic Games and the FA Youth Cup are for under-18s, the LEN Junior Water Polo European Championship is for under-17s. Many youth sport programmes have multiple age levels, for example under-8, under-10, under-12, etc. It is not, however, only underage sport that may be considered as "youth" sport; for example, the existence of the World Rowing U23 Championships recognises that adults aged 18–22 have not yet reached peak condition. Moreover, many definitions consider postsecondary/coll...</code> |
| <code>Catharism Christian dualist movement that thrived in some areas of Southern Europe Catharism ( KATH-ər-iz-əm; from the Ancient Greek: καθαροί, romanized: katharoí, "the pure ones") was a Christian quasi-dualist and pseudo-Gnostic movement which thrived in Southern Europe, particularly in northern Italy and southern France, between the 12th and 14th centuries. Denounced as a heretical sect by the Catholic Church, its followers were attacked first by the Albigensian Crusade and later by the Medieval Inquisition, which eradicated the sect by 1350. Around 1 million were slaughtered, hanged, or burnt at the stake. Followers were known as Cathars or Albigensians, after the French city Albi where the movement first took hold, but referred to themselves as Good Christians. They famously believed that there were not one, but two Gods—the good God of Heaven and the evil god of this age (2 Corinthians 4:4). According to tradition, Cathars believed that the good God was the God of the New Testamen...</code> | <code>Mosan art regional style of art from the Meuse river valley Mosan art is a regional style of art from the valley of the Meuse in present-day Belgium, the Netherlands, and Germany. Although in a broader sense the term applies to art from this region from all periods, it generally refers to Romanesque art, with Mosan Romanesque architecture, stone carving, metalwork, enamelling and manuscript illumination reaching a high level of development during the 11th, 12th and 13th centuries. The Meuse river valley lay in the heart of the earlier Carolingian Empire and therefore the style draws largely from the heritage of the Carolingian art tradition. Thus, Mosan art contains strong classical elements, which separates it from the international Romanesque style seen elsewhere during the period, for example in France, Germany Spain and Italy. However, it shares with mainstream Romanesque art elements such as the treatment of space. Although the iconography of 11th- and 12th-century Meuse valley ar...</code> | <code>Arrabal an area on the periphery of a city or large town An Arrabal is a Spanish word for an area on the periphery of a city or large town, a suburb. It may also refer to: Bruno Arrabal (born 1992), Brazilian footballer Fernando Arrabal (born 1932), Spanish author and filmmaker Progreso Alfarache Arrabal (1888–1964), Andalusian anarcho-syndicalist Arrabal (Zaragoza), a district in Zaragoza, Spain Arrabal (Leiria), a parish (freguesia) in Leiria, Portugal {'aliases': ['suburb']} {'subclass of': 'neighborhood', 'part of': 'city', 'on focus list of Wikimedia project': 'Wikipedia:Vital articles/Level/4', 'Commons category': 'Downtowns and city centers', 'said to be the same as': 'central business district'}</code> |
| <code>Varpa Swedish outdoor sport Varpa is an outdoor game of physical skill that allegedly dates back to the Viking Age and survived in Gotland. It is similar to boules and horseshoes but is played with a flat and heavy object called a "varpa" instead of balls. Varpas used to be well-shaped stones, but nowadays, aluminium is more popular. A varpa can weigh between one-half and five kilograms (one and eleven pounds). The object of the game is to throw the varpa as close to a stick as possible. The stick is fifteen metres (sixteen yards) away for women and twenty metres (twenty-two yards) away for men. The game can be played individually or in teams. No official nationally sponsored varpa teams exist; however, unofficial leagues are growing in popularity among youth in suburban areas of Sweden and Norway. "Varpa" is an old word which simply means "to throw". Varpa is one of the disciplines at the annual Stånga Games (Stångaspelen). {'note': 'infobox not present in Wikipedia'} {'instance of': ...</code> | <code>Pescara city in Abruzzo, Central Italy Pescara (Italian: [pesˈkaːra] ; Abruzzese: Pescàrë; Pescarese: Piscàrë) is the capital city of the province of Pescara, in the Abruzzo region of Italy. It is the most populated city in Abruzzo, with 118,657 (January 1, 2023) residents (and approximately 350,000 including the surrounding metropolitan area). Located on the Adriatic coast at the mouth of the River Aterno-Pescara, the present-day municipality was formed in 1927 joining the municipalities of the old Pescara fortress, the part of the city to the south of the river, and Castellamare Adriatico, the part of the city to the north of the river. The surrounding area was formed into the province of Pescara. The main commercial street of the city is Corso Umberto I, which runs between two squares, starting from Piazza della Repubblica and reaching the seacoast in Piazza Primo Maggio. The rectangle that it forms with Corso Vittorio Emanuele II and Via Nicola Fabrizi is home of the main shopping ...</code> | <code>religious epistemology approach to epistemological questions from a religious perspective Religious epistemology broadly covers religious approaches to epistemological questions, or attempts to understand the epistemological issues that come from religious belief. The questions asked by epistemologists apply to religious beliefs and propositions whether they seem rational, justified, warranted, reasonable, based on evidence and so on. Religious views also influence epistemological theories, such as in the case of Reformed epistemology. Reformed epistemology has mainly developed in contemporary Christian religious epistemology, as in the work of Alvin Plantinga (born 1932), William P. Alston (1921-2009), Nicholas Wolterstorff (born 1932) and Kelly James Clark, as a critique of and alternative to the idea of "evidentialism" of the sort proposed by W. K. Clifford (1845-1879). Alvin Plantinga, for instance, is critical of the evidentialist analysis of knowledge provided by Richard Feldman ...</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.EUCLIDEAN",
"triplet_margin": 0.5
}
```
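For reference, the loss configuration above corresponds to the following construction in Sentence Transformers. This is a sketch only; the triplets stand in for the `sentence_0`/`sentence_1`/`sentence_2` columns shown above and are abbreviated placeholders:

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# Placeholder (anchor, positive, negative) triplets standing in for
# the sentence_0 / sentence_1 / sentence_2 columns described above
train_dataset = Dataset.from_dict({
    "sentence_0": ["Court of Appeal Icelandic appellate court ..."],
    "sentence_1": ["Allgemeine Zeitung des Judentums magazine ..."],
    "sentence_2": ["inker line artist in a traditional comic book ..."],
})

# Euclidean distance and margin 0.5, matching the parameters above
loss = losses.TripletLoss(
    model,
    distance_metric=losses.TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=0.5,
)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```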
### Training Hyperparameters
#### Non-Default Hyperparameters
- `num_train_epochs`: 10
- `fp16`: True
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
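For readers reproducing the run, the key settings above map roughly onto `SentenceTransformerTrainingArguments` as follows. A sketch under stated assumptions: `output_dir` is a placeholder, and the batch size is taken from the full list above:

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import MultiDatasetBatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="outputs",  # placeholder path
    num_train_epochs=10,
    per_device_train_batch_size=8,
    fp16=True,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)
```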
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.6105 | 500 | 0.2552 |
| 1.2210 | 1000 | 0.1448 |
| 1.8315 | 1500 | 0.0974 |
| 2.4420 | 2000 | 0.0565 |
| 3.0525 | 2500 | 0.0499 |
| 3.6630 | 3000 | 0.0298 |
| 4.2735 | 3500 | 0.0212 |
| 4.8840 | 4000 | 0.0163 |
| 5.4945 | 4500 | 0.0121 |
| 6.1050 | 5000 | 0.0100        |
| 6.7155 | 5500 | 0.0062 |
| 7.3260 | 6000 | 0.0063 |
| 7.9365 | 6500 | 0.0046 |
| 8.5470 | 7000 | 0.0021 |
| 9.1575 | 7500 | 0.0021 |
| 9.7680 | 8000 | 0.0017 |
### Framework Versions
- Python: 3.11.12
- Sentence Transformers: 3.4.1
- Transformers: 4.51.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.2
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### TripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
ashvillain7/bsd-2-clause | ashvillain7 | 2025-04-27T20:31:46Z | 0 | 0 | null | [
"license:bsd-2-clause",
"region:us"
] | null | 2025-04-27T20:31:46Z | ---
license: bsd-2-clause
---
|
mlx-community/CodeLlama-7b-Instruct-hf-6bit-mlx | mlx-community | 2025-04-27T20:31:35Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"llama",
"llama-2",
"text-generation",
"conversational",
"code",
"base_model:codellama/CodeLlama-7b-Instruct-hf",
"base_model:quantized:codellama/CodeLlama-7b-Instruct-hf",
"license:llama2",
"6-bit",
"region:us"
] | text-generation | 2025-04-27T20:25:54Z | ---
language:
- code
pipeline_tag: text-generation
tags:
- llama-2
- mlx
license: llama2
base_model: codellama/CodeLlama-7b-Instruct-hf
library_name: mlx
---
# mlx-community/CodeLlama-7b-Instruct-hf-6bit-mlx
This model [mlx-community/CodeLlama-7b-Instruct-hf-6bit-mlx](https://huggingface.co/mlx-community/CodeLlama-7b-Instruct-hf-6bit-mlx) was
converted to MLX format from [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf)
using mlx-lm version **0.23.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download (if necessary) and load the 6-bit quantized weights and tokenizer
model, tokenizer = load("mlx-community/CodeLlama-7b-Instruct-hf-6bit-mlx")

prompt = "hello"

# Apply the model's chat template when one is defined, so the prompt
# matches the instruct-tuned format the model expects
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
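The model can also be run from the shell through mlx-lm's generate entry point; a minimal sketch with an illustrative prompt:

```bash
python -m mlx_lm.generate \
  --model mlx-community/CodeLlama-7b-Instruct-hf-6bit-mlx \
  --prompt "Write a Python function that reverses a string."
```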
|
stream-21-aina-asif-Go-Viral-Link/Original.Viral.Clip.aina.asif.Viral.Video.Leaks.official | stream-21-aina-asif-Go-Viral-Link | 2025-04-27T20:31:26Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-27T20:29:33Z |
Veera Dheera Sooran (Kannada) OTT Release: Here's When & Where To Watch Vikram's Film Online, DEETS
Veera Dheera Sooran will also be available to stream on Kannada, Tamil, Telugu, Malayalam, and Hindi
My Hero Academia: Vigilantes Episode 3 Release Date & Time: Here's When To Watch New Episode, Storyline & More
In India, one can stream My Hero Academia: Vigilantes Episode 3 on Crunchyroll.
Crushology 101 Episode 4 OTT Release Time: Here’s When & Where New Episode Will Premiere Online In India
Crushology 101 Episode 4 India Release Time: Featuring Roh Jeong-eui, Lee Chae-min, Kim Hyun-jin, Kim |
LuckyLukke/grpo_turn_level_onesided_1_starter_change-80 | LuckyLukke | 2025-04-27T20:24:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T20:21:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mlx-community/CodeLlama-7b-Instruct-hf-8bit-mlx | mlx-community | 2025-04-27T20:21:43Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"llama",
"llama-2",
"text-generation",
"conversational",
"code",
"base_model:codellama/CodeLlama-7b-Instruct-hf",
"base_model:quantized:codellama/CodeLlama-7b-Instruct-hf",
"license:llama2",
"8-bit",
"region:us"
] | text-generation | 2025-04-27T20:15:00Z | ---
language:
- code
pipeline_tag: text-generation
tags:
- llama-2
- mlx
license: llama2
library_name: mlx
base_model: codellama/CodeLlama-7b-Instruct-hf
---
# mlx-community/CodeLlama-7b-Instruct-hf-8bit-mlx
This model [mlx-community/CodeLlama-7b-Instruct-hf-8bit-mlx](https://huggingface.co/mlx-community/CodeLlama-7b-Instruct-hf-8bit-mlx) was
converted to MLX format from [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf)
using mlx-lm version **0.23.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download (if necessary) and load the 8-bit quantized weights and tokenizer
model, tokenizer = load("mlx-community/CodeLlama-7b-Instruct-hf-8bit-mlx")

prompt = "hello"

# Apply the model's chat template when one is defined, so the prompt
# matches the instruct-tuned format the model expects
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
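Alternatively, generation works from the command line via the mlx-lm CLI (illustrative prompt):

```bash
python -m mlx_lm.generate \
  --model mlx-community/CodeLlama-7b-Instruct-hf-8bit-mlx \
  --prompt "Explain what a binary search does."
```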
|
phospho-app/GetTheRubberNextG-44u87chuio | phospho-app | 2025-04-27T20:14:16Z | 0 | 0 | null | [
"phosphobot",
"gr00t",
"region:us"
] | null | 2025-04-27T19:36:48Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Traceback (most recent call last):
File "/opt/conda/lib/python3.11/site-packages/huggingface_hub/utils/_http.py", line 409, in hf_raise_for_status
response.raise_for_status()
File "/opt/conda/lib/python3.11/site-packages/requests/models.py", line 1024, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/api/models/nebo1337/GetTheRubberNextG/preupload/main
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/root/src/helper.py", line 367, in predict
api.upload_file(
File "/opt/conda/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/huggingface_hub/hf_api.py", line 1624, in _inner
return fn(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/huggingface_hub/hf_api.py", line 4662, in upload_file
commit_info = self.create_commit(
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/huggingface_hub/hf_api.py", line 1624, in _inner
return fn(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/huggingface_hub/hf_api.py", line 4193, in create_commit
self.preupload_lfs_files(
File "/opt/conda/lib/python3.11/site-packages/huggingface_hub/hf_api.py", line 4416, in preupload_lfs_files
_fetch_upload_modes(
File "/opt/conda/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/huggingface_hub/_commit_api.py", line 680, in _fetch_upload_modes
hf_raise_for_status(resp)
File "/opt/conda/lib/python3.11/site-packages/huggingface_hub/utils/_http.py", line 459, in hf_raise_for_status
raise _format(RepositoryNotFoundError, message, response) from e
huggingface_hub.errors.RepositoryNotFoundError: 404 Client Error. (Request ID: Root=1-680e9017-454b16ff3fa641a441110427;7b5d905f-fdf6-490d-a5c4-61ed9a661f37)
Repository Not Found for url: https://huggingface.co/api/models/nebo1337/GetTheRubberNextG/preupload/main.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated. For more details, see https://huggingface.co/docs/huggingface_hub/authentication
Note: Creating a commit assumes that the repo already exists on the Huggingface Hub. Please use `create_repo` if it's not the case.
```
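As the final note in the traceback suggests, the push failed because the destination repository was never created on the Hub. A minimal sketch of the fix, assuming a write-enabled `HF_TOKEN` is available in the environment (the repo id comes from the traceback):

```python
from huggingface_hub import HfApi

api = HfApi()

# Create the destination model repo first; exist_ok=True makes this a
# no-op if it already exists, after which the upload can be retried.
api.create_repo(
    repo_id="nebo1337/GetTheRubberNextG",
    repo_type="model",
    exist_ok=True,
)
```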
## Training parameters:
- **Dataset**: [nebo1337/GetTheRubberNextG](https://huggingface.co/datasets/nebo1337/GetTheRubberNextG)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 64
- **Training steps**: 1743
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=replicate_groot_training_pipeline)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=replicate_groot_training_pipeline)
|
1shoomun/semant-cache-updated | 1shoomun | 2025-04-27T20:09:16Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"t5",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:2620",
"loss:MultipleNegativesRankingLoss",
"loss:CosineSimilarityLoss",
"custom_code",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:jinaai/jina-embedding-b-en-v1",
"base_model:finetune:jinaai/jina-embedding-b-en-v1",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-04-27T20:08:03Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:2620
- loss:MultipleNegativesRankingLoss
- loss:CosineSimilarityLoss
base_model: jinaai/jina-embedding-b-en-v1
widget:
- source_sentence: What sector am I most heavily invested in?
sentences:
- 'Show me how to switch my stock portfolio to mutual funds
'
- What percentage of my portfolio is in X
- Which sector do I invest most in?
- source_sentence: Can you tell me how my portfolio ranks among others?
sentences:
- What is my AMC wise split ?
- In which funds am I paying highest fees
- Compare my portfolio with others?
- source_sentence: Which of my funds has the highest risk level?
sentences:
- Give me python code to find best funds in my portfolio
- Show my stocks ranked by performance
- Show my riskiest mutual funds
- source_sentence: What's going right with my portfolio?
sentences:
- Is my portfolio linked?
- My portfolio returns over all the years
- What's going well in my portfolio
- source_sentence: I'd like to know the percentage of large cap in my investments.
sentences:
- Show my riskiest holdings
- Can you show what percentage of my portfolio consists of large cap
- What is the expected return of my portfolio?
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on jinaai/jina-embedding-b-en-v1
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: test eval
type: test-eval
metrics:
- type: cosine_accuracy@1
value: 0.8625954198473282
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9961832061068703
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.8625954198473282
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.33206106870229
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.19999999999999998
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09999999999999999
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.8625954198473282
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9961832061068703
name: Cosine Recall@3
- type: cosine_recall@5
value: 1.0
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9460250731496836
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9271628498727736
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9271628498727736
name: Cosine Map@100
---
# SentenceTransformer based on jinaai/jina-embedding-b-en-v1
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [jinaai/jina-embedding-b-en-v1](https://huggingface.co/jinaai/jina-embedding-b-en-v1). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [jinaai/jina-embedding-b-en-v1](https://huggingface.co/jinaai/jina-embedding-b-en-v1) <!-- at revision 32aa658e5ceb90793454d22a57d8e3a14e699516 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: T5EncoderModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("1shoomun/semant-cache-updated")
# Run inference
sentences = [
"I'd like to know the percentage of large cap in my investments.",
'Can you show what percentage of my portfolio consists of large cap',
'Show my riskiest holdings',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `test-eval`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:----------|
| cosine_accuracy@1 | 0.8626 |
| cosine_accuracy@3 | 0.9962 |
| cosine_accuracy@5 | 1.0 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.8626 |
| cosine_precision@3 | 0.3321 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.8626 |
| cosine_recall@3 | 0.9962 |
| cosine_recall@5 | 1.0 |
| cosine_recall@10 | 1.0 |
| **cosine_ndcg@10** | **0.946** |
| cosine_mrr@10 | 0.9272 |
| cosine_map@100 | 0.9272 |
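The table above was produced with the evaluator linked earlier. A toy sketch of how such an evaluation is wired up; the queries, corpus, and relevance judgments here are hypothetical stand-ins for the held-out split:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("1shoomun/semant-cache-updated")

# Hypothetical miniature retrieval setup for illustration
queries = {"q1": "What is my sector allocation?"}
corpus = {
    "d1": "I'd like to know my sector distribution.",
    "d2": "Show my riskiest holdings",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="test-eval")
results = evaluator(model)
print(results)  # includes cosine_ndcg@10, reported as 0.946 above
```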
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Datasets
#### Unnamed Dataset
* Size: 1,310 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:--------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 4 tokens</li><li>mean: 10.62 tokens</li><li>max: 22 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.06 tokens</li><li>max: 17 tokens</li></ul> | <ul><li>min: 1.0</li><li>mean: 1.0</li><li>max: 1.0</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:--------------------------------------------------------------------|:-------------------------------------------------------------------|:-----------------|
| <code>are there any of my funds that are lagging behind</code> | <code>do I hold any funds that haven't been performing well</code> | <code>1.0</code> |
| <code>Which sectors are performing the best in my portfolio?</code> | <code>What are my best performing sectors?</code> | <code>1.0</code> |
| <code>List some of my top holdings</code> | <code>Show some of my best performing holdings</code> | <code>1.0</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### Unnamed Dataset
* Size: 1,310 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:--------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 4 tokens</li><li>mean: 10.68 tokens</li><li>max: 22 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.13 tokens</li><li>max: 17 tokens</li></ul> | <ul><li>min: 1.0</li><li>mean: 1.0</li><li>max: 1.0</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:--------------------------------------------------------------------|:----------------------------------------------------------|:-----------------|
| <code>I need my portfolio to hit 1000% returns by next month</code> | <code>make my portfolio return 1000% by next month</code> | <code>1.0</code> |
| <code>What are my stocks?</code> | <code>Show my stocks</code> | <code>1.0</code> |
| <code>I'd like to know my sector distribution.</code> | <code>What is my sector allocation?</code> | <code>1.0</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
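For the second dataset the objective is a cosine-similarity regression; a minimal sketch of constructing it (same placeholder base checkpoint as above):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import CosineSimilarityLoss

model = SentenceTransformer("path/to/base-model")  # placeholder, as above
# Regresses cosine(u, v) toward the float label; torch.nn.MSELoss is the default loss_fct.
loss = CosineSimilarityLoss(model)
```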
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `num_train_epochs`: 15
- `multi_dataset_batch_sampler`: round_robin
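These non-default values map directly onto `SentenceTransformerTrainingArguments`; a minimal sketch (the output directory is a placeholder):
```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder
    eval_strategy="steps",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=15,
    multi_dataset_batch_sampler="round_robin",  # alternate batches across the two datasets
)
```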
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 15
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss | test-eval_cosine_ndcg@10 |
|:-------:|:----:|:-------------:|:------------------------:|
| 1.0 | 82 | - | 0.8929 |
| 2.0 | 164 | - | 0.9007 |
| 3.0 | 246 | - | 0.9112 |
| 4.0 | 328 | - | 0.9188 |
| 5.0 | 410 | - | 0.9285 |
| 6.0 | 492 | - | 0.9286 |
| 6.0976 | 500 | 0.2352 | 0.9291 |
| 7.0 | 574 | - | 0.9356 |
| 8.0 | 656 | - | 0.9404 |
| 9.0 | 738 | - | 0.9406 |
| 10.0 | 820 | - | 0.9434 |
| 11.0 | 902 | - | 0.9424 |
| 12.0 | 984 | - | 0.9455 |
| 12.1951 | 1000 | 0.164 | 0.9460 |
### Framework Versions
- Python: 3.10.16
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.7.0
- Accelerate: 1.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
jonkinesis/kaianoir | jonkinesis | 2025-04-27T20:08:21Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-04-27T02:53:24Z | ---
license: creativeml-openrail-m
---
|
hardik9719/videomae-base-finetuned-ucf-timesfomer-subset | hardik9719 | 2025-04-27T20:07:10Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"timesformer",
"video-classification",
"generated_from_trainer",
"base_model:facebook/timesformer-base-finetuned-k400",
"base_model:finetune:facebook/timesformer-base-finetuned-k400",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2025-04-27T16:54:42Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/timesformer-base-finetuned-k400
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf-timesfomer-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf-timesfomer-subset
This model is a fine-tuned version of [facebook/timesformer-base-finetuned-k400](https://huggingface.co/facebook/timesformer-base-finetuned-k400) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6732
- Accuracy: 0.6689
## Model description
More information needed
## Intended uses & limitations
More information needed
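A minimal inference sketch with the video-classification pipeline (the clip path is a placeholder, and video decoding needs an extra backend such as `decord` or `av`, depending on your transformers version):
```python
from transformers import pipeline

clf = pipeline("video-classification", model="hardik9719/videomae-base-finetuned-ucf-timesfomer-subset")
print(clf("sample_clip.mp4"))  # placeholder path to a short video file
```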
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9369 | 0.26 | 26 | 0.6588 | 0.7272 |
| 0.9571 | 1.26 | 52 | 0.5239 | 0.7708 |
| 0.4176 | 2.26 | 78 | 0.7025 | 0.7142 |
| 0.1547 | 3.22 | 100 | 0.8794 | 0.6986 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu118
- Datasets 3.3.2
- Tokenizers 0.21.1
|
fengyao1909/scp_sft_3b | fengyao1909 | 2025-04-27T19:54:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T19:29:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
adamadam111/bros-funsd-finetuned | adamadam111 | 2025-04-27T19:53:43Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bros",
"token-classification",
"generated_from_trainer",
"dataset:funsd",
"base_model:naver-clova-ocr/bros-base-uncased",
"base_model:finetune:naver-clova-ocr/bros-base-uncased",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-04-27T19:53:29Z | ---
library_name: transformers
base_model: naver-clova-ocr/bros-base-uncased
tags:
- generated_from_trainer
datasets:
- funsd
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bros-funsd-finetuned
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: funsd
type: funsd
config: funsd
split: test
args: funsd
metrics:
- name: Precision
type: precision
value: 0.5992897306895532
- name: Recall
type: recall
value: 0.6416349809885932
- name: F1
type: f1
value: 0.6197398622800306
- name: Accuracy
type: accuracy
value: 0.7016008201245959
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bros-funsd-finetuned
This model is a fine-tuned version of [naver-clova-ocr/bros-base-uncased](https://huggingface.co/naver-clova-ocr/bros-base-uncased) on the funsd dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7866
- Precision: 0.5993
- Recall: 0.6416
- F1: 0.6197
- Accuracy: 0.7016
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 10 | 1.6503 | 0.0207 | 0.0032 | 0.0055 | 0.3213 |
| No log | 2.0 | 20 | 1.5622 | 0.1480 | 0.0596 | 0.0850 | 0.3890 |
| No log | 3.0 | 30 | 1.5357 | 0.0770 | 0.0672 | 0.0717 | 0.3803 |
| No log | 4.0 | 40 | 1.5160 | 0.1058 | 0.0976 | 0.1015 | 0.4078 |
| No log | 5.0 | 50 | 1.4925 | 0.1608 | 0.1768 | 0.1684 | 0.4354 |
| No log | 6.0 | 60 | 1.4216 | 0.2011 | 0.2288 | 0.2141 | 0.4571 |
| No log | 7.0 | 70 | 1.3546 | 0.2565 | 0.3241 | 0.2864 | 0.5001 |
| No log | 8.0 | 80 | 1.2950 | 0.2829 | 0.3818 | 0.3250 | 0.5048 |
| No log | 9.0 | 90 | 1.2862 | 0.2909 | 0.3745 | 0.3275 | 0.5226 |
| No log | 10.0 | 100 | 1.2108 | 0.2911 | 0.3815 | 0.3302 | 0.5491 |
| No log | 11.0 | 110 | 1.2023 | 0.3348 | 0.3609 | 0.3474 | 0.5545 |
| No log | 12.0 | 120 | 1.1720 | 0.3616 | 0.4030 | 0.3812 | 0.5668 |
| No log | 13.0 | 130 | 1.1267 | 0.3600 | 0.4005 | 0.3792 | 0.5825 |
| No log | 14.0 | 140 | 1.1025 | 0.3677 | 0.4499 | 0.4047 | 0.6144 |
| No log | 15.0 | 150 | 1.1038 | 0.3914 | 0.4655 | 0.4252 | 0.6182 |
| No log | 16.0 | 160 | 1.1034 | 0.4144 | 0.4769 | 0.4434 | 0.6399 |
| No log | 17.0 | 170 | 1.1885 | 0.4136 | 0.5250 | 0.4627 | 0.6303 |
| No log | 18.0 | 180 | 1.1734 | 0.4652 | 0.4854 | 0.4751 | 0.6491 |
| No log | 19.0 | 190 | 1.2263 | 0.4312 | 0.5995 | 0.5016 | 0.6457 |
| No log | 20.0 | 200 | 1.2326 | 0.4482 | 0.5612 | 0.4984 | 0.6478 |
| No log | 21.0 | 210 | 1.1374 | 0.4892 | 0.5954 | 0.5371 | 0.6776 |
| No log | 22.0 | 220 | 1.2278 | 0.4939 | 0.5779 | 0.5326 | 0.6712 |
| No log | 23.0 | 230 | 1.2979 | 0.4728 | 0.6030 | 0.5300 | 0.6642 |
| No log | 24.0 | 240 | 1.3170 | 0.4885 | 0.5916 | 0.5351 | 0.6682 |
| No log | 25.0 | 250 | 1.3692 | 0.4746 | 0.6011 | 0.5304 | 0.6596 |
| No log | 26.0 | 260 | 1.3706 | 0.5121 | 0.6106 | 0.5570 | 0.6742 |
| No log | 27.0 | 270 | 1.4494 | 0.5195 | 0.6036 | 0.5584 | 0.6719 |
| No log | 28.0 | 280 | 1.4790 | 0.5207 | 0.6027 | 0.5587 | 0.6678 |
| No log | 29.0 | 290 | 1.4106 | 0.5499 | 0.5887 | 0.5686 | 0.6838 |
| No log | 30.0 | 300 | 1.4539 | 0.5607 | 0.5954 | 0.5775 | 0.6810 |
| No log | 31.0 | 310 | 1.4746 | 0.5681 | 0.5989 | 0.5831 | 0.6827 |
| No log | 32.0 | 320 | 1.5373 | 0.5233 | 0.6144 | 0.5652 | 0.6698 |
| No log | 33.0 | 330 | 1.6007 | 0.5131 | 0.6353 | 0.5677 | 0.6682 |
| No log | 34.0 | 340 | 1.5237 | 0.5392 | 0.6489 | 0.5890 | 0.6868 |
| No log | 35.0 | 350 | 1.5382 | 0.5439 | 0.6239 | 0.5812 | 0.6908 |
| No log | 36.0 | 360 | 1.5363 | 0.5615 | 0.6071 | 0.5834 | 0.6872 |
| No log | 37.0 | 370 | 1.5504 | 0.5572 | 0.6201 | 0.5870 | 0.6943 |
| No log | 38.0 | 380 | 1.6496 | 0.5478 | 0.6176 | 0.5806 | 0.6796 |
| No log | 39.0 | 390 | 1.6083 | 0.5665 | 0.6144 | 0.5895 | 0.6913 |
| No log | 40.0 | 400 | 1.5588 | 0.5719 | 0.6239 | 0.5968 | 0.6977 |
| No log | 41.0 | 410 | 1.6280 | 0.5578 | 0.6328 | 0.5929 | 0.6928 |
| No log | 42.0 | 420 | 1.5925 | 0.5842 | 0.6112 | 0.5974 | 0.7023 |
| No log | 43.0 | 430 | 1.5921 | 0.5810 | 0.6204 | 0.6001 | 0.6981 |
| No log | 44.0 | 440 | 1.6152 | 0.5740 | 0.6207 | 0.5964 | 0.6917 |
| No log | 45.0 | 450 | 1.6629 | 0.5634 | 0.6283 | 0.5941 | 0.6853 |
| No log | 46.0 | 460 | 1.6112 | 0.5829 | 0.6214 | 0.6015 | 0.7021 |
| No log | 47.0 | 470 | 1.6214 | 0.5761 | 0.6258 | 0.5999 | 0.6982 |
| No log | 48.0 | 480 | 1.6216 | 0.5953 | 0.6119 | 0.6034 | 0.7023 |
| No log | 49.0 | 490 | 1.6592 | 0.5809 | 0.6163 | 0.5981 | 0.6962 |
| 0.4349 | 50.0 | 500 | 1.6796 | 0.5603 | 0.6489 | 0.6014 | 0.6947 |
| 0.4349 | 51.0 | 510 | 1.6835 | 0.5967 | 0.6001 | 0.5984 | 0.6933 |
| 0.4349 | 52.0 | 520 | 1.6615 | 0.5832 | 0.6553 | 0.6171 | 0.6999 |
| 0.4349 | 53.0 | 530 | 1.6553 | 0.5778 | 0.6565 | 0.6147 | 0.6970 |
| 0.4349 | 54.0 | 540 | 1.6980 | 0.5946 | 0.6004 | 0.5975 | 0.6888 |
| 0.4349 | 55.0 | 550 | 1.6484 | 0.5694 | 0.6356 | 0.6007 | 0.6960 |
| 0.4349 | 56.0 | 560 | 1.6996 | 0.5902 | 0.6293 | 0.6091 | 0.6941 |
| 0.4349 | 57.0 | 570 | 1.6973 | 0.5780 | 0.6337 | 0.6046 | 0.6947 |
| 0.4349 | 58.0 | 580 | 1.7212 | 0.5973 | 0.6087 | 0.6030 | 0.6969 |
| 0.4349 | 59.0 | 590 | 1.7086 | 0.5791 | 0.6435 | 0.6096 | 0.6976 |
| 0.4349 | 60.0 | 600 | 1.6767 | 0.5845 | 0.6233 | 0.6033 | 0.6996 |
| 0.4349 | 61.0 | 610 | 1.6744 | 0.5886 | 0.6201 | 0.6039 | 0.6993 |
| 0.4349 | 62.0 | 620 | 1.6783 | 0.5989 | 0.6286 | 0.6134 | 0.6999 |
| 0.4349 | 63.0 | 630 | 1.6958 | 0.5936 | 0.6489 | 0.6200 | 0.7019 |
| 0.4349 | 64.0 | 640 | 1.7297 | 0.5806 | 0.6286 | 0.6037 | 0.6941 |
| 0.4349 | 65.0 | 650 | 1.7373 | 0.5804 | 0.6540 | 0.6150 | 0.6961 |
| 0.4349 | 66.0 | 660 | 1.7579 | 0.5818 | 0.6404 | 0.6097 | 0.6941 |
| 0.4349 | 67.0 | 670 | 1.7654 | 0.5889 | 0.6369 | 0.6120 | 0.6971 |
| 0.4349 | 68.0 | 680 | 1.7649 | 0.5846 | 0.6515 | 0.6162 | 0.6953 |
| 0.4349 | 69.0 | 690 | 1.7294 | 0.5940 | 0.6445 | 0.6182 | 0.6999 |
| 0.4349 | 70.0 | 700 | 1.7256 | 0.5871 | 0.6511 | 0.6175 | 0.7021 |
| 0.4349 | 71.0 | 710 | 1.7303 | 0.5889 | 0.6518 | 0.6187 | 0.7029 |
| 0.4349 | 72.0 | 720 | 1.7391 | 0.5994 | 0.6334 | 0.6159 | 0.7023 |
| 0.4349 | 73.0 | 730 | 1.7270 | 0.5838 | 0.6448 | 0.6128 | 0.6999 |
| 0.4349 | 74.0 | 740 | 1.7357 | 0.6060 | 0.6324 | 0.6189 | 0.7035 |
| 0.4349 | 75.0 | 750 | 1.7210 | 0.6030 | 0.6362 | 0.6192 | 0.7036 |
| 0.4349 | 76.0 | 760 | 1.7575 | 0.5903 | 0.6473 | 0.6175 | 0.6990 |
| 0.4349 | 77.0 | 770 | 1.7530 | 0.5859 | 0.6416 | 0.6125 | 0.6958 |
| 0.4349 | 78.0 | 780 | 1.7395 | 0.5865 | 0.6445 | 0.6141 | 0.6988 |
| 0.4349 | 79.0 | 790 | 1.7432 | 0.5900 | 0.6575 | 0.6219 | 0.7025 |
| 0.4349 | 80.0 | 800 | 1.7497 | 0.5957 | 0.6556 | 0.6242 | 0.7039 |
| 0.4349 | 81.0 | 810 | 1.7590 | 0.6003 | 0.6467 | 0.6226 | 0.7040 |
| 0.4349 | 82.0 | 820 | 1.7641 | 0.5979 | 0.6413 | 0.6189 | 0.7019 |
| 0.4349 | 83.0 | 830 | 1.7632 | 0.6103 | 0.6407 | 0.6251 | 0.7070 |
| 0.4349 | 84.0 | 840 | 1.7602 | 0.6082 | 0.6420 | 0.6246 | 0.7066 |
| 0.4349 | 85.0 | 850 | 1.7697 | 0.6014 | 0.6458 | 0.6228 | 0.7051 |
| 0.4349 | 86.0 | 860 | 1.7828 | 0.5945 | 0.6397 | 0.6163 | 0.7001 |
| 0.4349 | 87.0 | 870 | 1.7834 | 0.6005 | 0.6369 | 0.6182 | 0.7005 |
| 0.4349 | 88.0 | 880 | 1.7760 | 0.5966 | 0.6388 | 0.6170 | 0.7013 |
| 0.4349 | 89.0 | 890 | 1.7757 | 0.5942 | 0.6426 | 0.6174 | 0.7021 |
| 0.4349 | 90.0 | 900 | 1.7755 | 0.5946 | 0.6442 | 0.6184 | 0.7025 |
| 0.4349 | 91.0 | 910 | 1.7778 | 0.5964 | 0.6432 | 0.6189 | 0.7012 |
| 0.4349 | 92.0 | 920 | 1.7757 | 0.5993 | 0.6435 | 0.6206 | 0.7019 |
| 0.4349 | 93.0 | 930 | 1.7751 | 0.6014 | 0.6448 | 0.6223 | 0.7025 |
| 0.4349 | 94.0 | 940 | 1.7769 | 0.6024 | 0.6410 | 0.6211 | 0.7025 |
| 0.4349 | 95.0 | 950 | 1.7791 | 0.6026 | 0.6394 | 0.6204 | 0.7020 |
| 0.4349 | 96.0 | 960 | 1.7862 | 0.6016 | 0.6381 | 0.6193 | 0.7012 |
| 0.4349 | 97.0 | 970 | 1.7876 | 0.5985 | 0.6410 | 0.6190 | 0.7007 |
| 0.4349 | 98.0 | 980 | 1.7882 | 0.5976 | 0.6404 | 0.6182 | 0.7012 |
| 0.4349 | 99.0 | 990 | 1.7870 | 0.5988 | 0.6413 | 0.6193 | 0.7014 |
| 0.0052 | 100.0 | 1000 | 1.7866 | 0.5993 | 0.6416 | 0.6197 | 0.7016 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
tanthinhdt/Cytotoxicity-Nanoparticles_BioGPT_20250428-012904 | tanthinhdt | 2025-04-27T19:53:35Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"biogpt",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/biogpt",
"base_model:finetune:microsoft/biogpt",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-27T18:29:10Z | ---
library_name: transformers
license: mit
base_model: microsoft/biogpt
tags:
- generated_from_trainer
metrics:
- matthews_correlation
- accuracy
model-index:
- name: Cytotoxicity-Nanoparticles_BioGPT_20250428-012904
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Cytotoxicity-Nanoparticles_BioGPT_20250428-012904
This model is a fine-tuned version of [microsoft/biogpt](https://huggingface.co/microsoft/biogpt) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 194.2566
- R Squared: 0.808
- Matthews Correlation: 0.7421
- Accuracy: 0.8708
## Model description
More information needed
## Intended uses & limitations
More information needed
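A minimal inference sketch with the text-classification pipeline (the example sentence is invented, and the label semantics are not documented in this card):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="tanthinhdt/Cytotoxicity-Nanoparticles_BioGPT_20250428-012904")
print(clf("ZnO nanoparticles at 50 ug/mL reduced HeLa cell viability to 40%."))  # invented example
```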
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 30
- eval_batch_size: 30
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | R Squared | Matthews Correlation | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:--------------------:|:--------:|
| 3954.792 | 1.0 | 54 | 1939.9783 | -0.921 | 0.0 | 0.4888 |
| 1857.1345 | 2.0 | 108 | 1701.0602 | -0.684 | 0.0 | 0.4888 |
| 1586.262 | 3.0 | 162 | 1444.4485 | -0.43 | 0.0 | 0.4888 |
| 1303.3843 | 4.0 | 216 | 1183.9807 | -0.172 | 0.0 | 0.4888 |
| 1069.758 | 5.0 | 270 | 936.9963 | 0.072 | 0.0 | 0.4888 |
| 852.7255 | 6.0 | 324 | 1015.5709 | -0.005 | 0.0 | 0.4888 |
| 943.7232 | 7.0 | 378 | 853.4596 | 0.155 | 0.0 | 0.4888 |
| 768.215 | 8.0 | 432 | 673.1525 | 0.334 | 0.0 | 0.4888 |
| 657.1353 | 9.0 | 486 | 646.2881 | 0.36 | 0.0 | 0.4888 |
| 610.8252 | 10.0 | 540 | 609.7901 | 0.396 | 0.4332 | 0.7135 |
| 520.4049 | 11.0 | 594 | 574.8264 | 0.431 | 0.5034 | 0.7472 |
| 490.2998 | 12.0 | 648 | 557.5573 | 0.448 | 0.4859 | 0.7416 |
| 486.3869 | 13.0 | 702 | 450.8120 | 0.554 | 0.4859 | 0.7416 |
| 436.2832 | 14.0 | 756 | 480.4896 | 0.524 | 0.4978 | 0.7472 |
| 424.0476 | 15.0 | 810 | 439.2729 | 0.565 | 0.5830 | 0.7865 |
| 395.0732 | 16.0 | 864 | 504.2526 | 0.501 | 0.5329 | 0.7640 |
| 365.5696 | 17.0 | 918 | 418.4317 | 0.586 | 0.6062 | 0.7921 |
| 345.289 | 18.0 | 972 | 370.4825 | 0.633 | 0.6495 | 0.8202 |
| 305.6927 | 19.0 | 1026 | 380.0273 | 0.624 | 0.6516 | 0.8258 |
| 299.6194 | 20.0 | 1080 | 302.3536 | 0.701 | 0.6349 | 0.8146 |
| 269.0756 | 21.0 | 1134 | 366.7752 | 0.637 | 0.6874 | 0.8427 |
| 233.5834 | 22.0 | 1188 | 280.7651 | 0.722 | 0.6966 | 0.8483 |
| 220.198 | 23.0 | 1242 | 306.2237 | 0.697 | 0.6628 | 0.8315 |
| 214.9455 | 24.0 | 1296 | 330.7322 | 0.673 | 0.6856 | 0.8427 |
| 182.1093 | 25.0 | 1350 | 280.3594 | 0.722 | 0.6971 | 0.8483 |
| 199.767 | 26.0 | 1404 | 299.1470 | 0.704 | 0.6928 | 0.8427 |
| 185.5706 | 27.0 | 1458 | 255.0808 | 0.747 | 0.7191 | 0.8596 |
| 160.4716 | 28.0 | 1512 | 296.7401 | 0.706 | 0.7196 | 0.8596 |
| 153.9474 | 29.0 | 1566 | 241.4597 | 0.761 | 0.7648 | 0.8820 |
| 139.2026 | 30.0 | 1620 | 231.8831 | 0.77 | 0.7189 | 0.8596 |
| 137.9976 | 31.0 | 1674 | 258.4150 | 0.744 | 0.7253 | 0.8596 |
| 138.4847 | 32.0 | 1728 | 245.1484 | 0.757 | 0.7879 | 0.8933 |
| 122.4346 | 33.0 | 1782 | 238.7276 | 0.764 | 0.7421 | 0.8708 |
| 120.9553 | 34.0 | 1836 | 255.5294 | 0.747 | 0.7312 | 0.8652 |
| 104.6512 | 35.0 | 1890 | 221.7303 | 0.78 | 0.7421 | 0.8708 |
| 110.154 | 36.0 | 1944 | 253.2245 | 0.749 | 0.7548 | 0.8764 |
| 106.2475 | 37.0 | 1998 | 251.0776 | 0.751 | 0.7642 | 0.8820 |
| 98.1837 | 38.0 | 2052 | 235.0105 | 0.767 | 0.7654 | 0.8820 |
| 103.8237 | 39.0 | 2106 | 229.4165 | 0.773 | 0.7429 | 0.8708 |
| 83.0148 | 40.0 | 2160 | 204.9174 | 0.797 | 0.7667 | 0.8820 |
| 88.5942 | 41.0 | 2214 | 201.9303 | 0.8 | 0.7667 | 0.8820 |
| 81.0979 | 42.0 | 2268 | 203.3398 | 0.799 | 0.7429 | 0.8708 |
| 80.9213 | 43.0 | 2322 | 202.2505 | 0.8 | 0.7421 | 0.8708 |
| 74.3284 | 44.0 | 2376 | 198.2076 | 0.804 | 0.7773 | 0.8876 |
| 72.1458 | 45.0 | 2430 | 196.0490 | 0.806 | 0.7537 | 0.8764 |
| 70.7258 | 46.0 | 2484 | 192.0745 | 0.81 | 0.7421 | 0.8708 |
| 64.7731 | 47.0 | 2538 | 191.2959 | 0.811 | 0.7654 | 0.8820 |
| 69.1521 | 48.0 | 2592 | 196.5774 | 0.805 | 0.7421 | 0.8708 |
| 67.6264 | 49.0 | 2646 | 192.4344 | 0.809 | 0.7421 | 0.8708 |
| 65.597 | 50.0 | 2700 | 194.2566 | 0.808 | 0.7421 | 0.8708 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
heyIamUmair/llama-3.2-3b-merged-gguf | heyIamUmair | 2025-04-27T19:53:29Z | 0 | 0 | null | [
"gguf",
"llama.cpp",
"ollama",
"fine-tuned",
"legal",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-27T19:46:17Z | ---
license: apache-2.0
id: heyIamUmair/llama-3.2-3b-merged-gguf
base_model: unsloth/llama-3.2-3b
tags:
- gguf
- llama.cpp
- ollama
- fine-tuned
- legal
---
# Llama 3.2 3B Fine-Tuned Legal Model (GGUF)
This is a **Llama 3.2 3B model fine-tuned on custom legal datasets**, merged into a single checkpoint and exported in **GGUF** format.
- **Base Model**: unsloth/llama-3.2-3b
- **Fine-tuning**: Custom legal domain LoRA
- **Format**: GGUF (FP16)
- **Usage**: Compatible with `llama.cpp`, `Ollama`, `LM Studio`, `koboldcpp`, etc.
## How to Use with Ollama
```bash
ollama run llama-3.2-3b-merged-gguf
```
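To run the same file with `llama.cpp` directly, a minimal sketch (the GGUF filename is a guess; check this repository's file listing for the real one):
```bash
# Hypothetical filename and flags; adjust to your llama.cpp build.
./llama-cli -m llama-3.2-3b-merged.fp16.gguf -p "Summarize the key clauses of this contract:" -n 256
```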
|
Mohammad12141000/Sote12141000 | Mohammad12141000 | 2025-04-27T19:48:46Z | 0 | 0 | null | [
"automatic-speech-recognition",
"fa",
"license:apache-2.0",
"region:us"
] | automatic-speech-recognition | 2025-04-27T19:37:51Z | ---
license: apache-2.0
language:
- fa
pipeline_tag: automatic-speech-recognition
--- |
shlapique/llm-course-hw2-reward-model | shlapique | 2025-04-27T19:48:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-classification",
"generated_from_trainer",
"trl",
"reward-trainer",
"dataset:HumanLLMs/Human-Like-DPO-Dataset",
"base_model:HuggingFaceTB/SmolLM-135M-Instruct",
"base_model:finetune:HuggingFaceTB/SmolLM-135M-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-27T19:48:09Z | ---
base_model: HuggingFaceTB/SmolLM-135M-Instruct
datasets: HumanLLMs/Human-Like-DPO-Dataset
library_name: transformers
model_name: trainer_output
tags:
- generated_from_trainer
- trl
- reward-trainer
licence: license
---
# Model Card for trainer_output
This model is a fine-tuned version of [HuggingFaceTB/SmolLM-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM-135M-Instruct) on the [HumanLLMs/Human-Like-DPO-Dataset](https://huggingface.co/datasets/HumanLLMs/Human-Like-DPO-Dataset) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline

# This is a reward model (sequence-classification head), so it scores text
# rather than generating it; the exact input formatting may depend on the
# chat template used during training.
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
answer = "I'd go to the future - I want to see how everything turns out!"  # illustrative candidate answer
scorer = pipeline("text-classification", model="shlapique/llm-course-hw2-reward-model", device="cuda")
print(scorer(question + "\n" + answer))
```
## Training procedure
This model was trained with Reward.
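A minimal sketch of the reward-modeling setup with TRL; the preference pairs below are toy stand-ins for the real dataset, and column handling may vary across TRL versions:
```python
from datasets import Dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from trl import RewardConfig, RewardTrainer

# Toy chosen/rejected pairs; the actual run used HumanLLMs/Human-Like-DPO-Dataset.
train_dataset = Dataset.from_dict({
    "chosen": ["I'd pick the future - imagine the tech we'd get to see!"],
    "rejected": ["As an AI, I do not have preferences about time travel."],
})
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM-135M-Instruct")
model = AutoModelForSequenceClassification.from_pretrained("HuggingFaceTB/SmolLM-135M-Instruct", num_labels=1)
trainer = RewardTrainer(
    model=model,
    args=RewardConfig(output_dir="trainer_output"),
    processing_class=tokenizer,
    train_dataset=train_dataset,
)
trainer.train()
```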
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.1
- Pytorch: 2.5.1+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
scheshmi/Qwen2.5-VL-3B-reasoning | scheshmi | 2025-04-27T19:47:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-04-27T19:44:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
NexesMess/Llama_3.x_70b_Dolphineva_128K_v1.02 | NexesMess | 2025-04-27T19:45:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2406.11617",
"base_model:EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1",
"base_model:merge:EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1",
"base_model:cognitivecomputations/dolphin-2.9.1-llama-3-70b",
"base_model:merge:cognitivecomputations/dolphin-2.9.1-llama-3-70b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T19:10:05Z | ---
base_model:
- EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
- cognitivecomputations/dolphin-2.9.1-llama-3-70b
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Linear DELLA](https://arxiv.org/abs/2406.11617) merge method using [EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1](https://huggingface.co/EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1) as a base.
### Models Merged
The following models were included in the merge:
* [cognitivecomputations/dolphin-2.9.1-llama-3-70b](https://huggingface.co/cognitivecomputations/dolphin-2.9.1-llama-3-70b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: della_linear
base_model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
models:
- model: cognitivecomputations/dolphin-2.9.1-llama-3-70b
parameters:
weight: # layer per layer
- filter: q_proj
value: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
- filter: k_proj
value: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
- filter: v_proj
value: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
- filter: o_proj
value: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
- filter: input_layernorm
value: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
- filter: up_proj
value: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
- filter: gate_proj
value: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
- filter: down_proj
value: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
- filter: post_attention_layernorm
value: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
- value: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
density: 0.5
epsilon: 0.1
lambda: 1.0
- model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
parameters:
weight: 1.0
density: # layer per layer
- filter: q_proj
value: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
- filter: k_proj
value: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
- filter: v_proj
value: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
- filter: o_proj
value: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
- filter: input_layernorm
value: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
- filter: up_proj
value: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
- filter: gate_proj
value: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
- filter: down_proj
value: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
- filter: post_attention_layernorm
value: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
- value: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
epsilon: # layer per layer
- filter: q_proj
value: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.04, 0.0425, 0.045, 0.0475, 0.05, 0.0525, 0.055, 0.0575, 0.06, 0.0625, 0.065, 0.0675, 0.07, 0.0725, 0.075, 0.0775, 0.08, 0.0825, 0.085, 0.0875, 0.09, 0.0925, 0.095, 0.0975, 0.1, 0.0975, 0.095, 0.0925, 0.09, 0.0875, 0.085, 0.0825, 0.08, 0.0775, 0.075, 0.0725, 0.07, 0.0675, 0.065, 0.0625, 0.06, 0.0575, 0.055, 0.0525, 0.05, 0.0475, 0.045, 0.0425, 0.04, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
- filter: k_proj
value: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.04, 0.0425, 0.045, 0.0475, 0.05, 0.0525, 0.055, 0.0575, 0.06, 0.0625, 0.065, 0.0675, 0.07, 0.0725, 0.075, 0.0775, 0.08, 0.0825, 0.085, 0.0875, 0.09, 0.0925, 0.095, 0.0975, 0.1, 0.0975, 0.095, 0.0925, 0.09, 0.0875, 0.085, 0.0825, 0.08, 0.0775, 0.075, 0.0725, 0.07, 0.0675, 0.065, 0.0625, 0.06, 0.0575, 0.055, 0.0525, 0.05, 0.0475, 0.045, 0.0425, 0.04, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
- filter: v_proj
value: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.04, 0.0425, 0.045, 0.0475, 0.05, 0.0525, 0.055, 0.0575, 0.06, 0.0625, 0.065, 0.0675, 0.07, 0.0725, 0.075, 0.0775, 0.08, 0.0825, 0.085, 0.0875, 0.09, 0.0925, 0.095, 0.0975, 0.1, 0.0975, 0.095, 0.0925, 0.09, 0.0875, 0.085, 0.0825, 0.08, 0.0775, 0.075, 0.0725, 0.07, 0.0675, 0.065, 0.0625, 0.06, 0.0575, 0.055, 0.0525, 0.05, 0.0475, 0.045, 0.0425, 0.04, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
- filter: o_proj
value: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.04, 0.0425, 0.045, 0.0475, 0.05, 0.0525, 0.055, 0.0575, 0.06, 0.0625, 0.065, 0.0675, 0.07, 0.0725, 0.075, 0.0775, 0.08, 0.0825, 0.085, 0.0875, 0.09, 0.0925, 0.095, 0.0975, 0.1, 0.0975, 0.095, 0.0925, 0.09, 0.0875, 0.085, 0.0825, 0.08, 0.0775, 0.075, 0.0725, 0.07, 0.0675, 0.065, 0.0625, 0.06, 0.0575, 0.055, 0.0525, 0.05, 0.0475, 0.045, 0.0425, 0.04, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
- filter: input_layernorm
value: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.04, 0.0425, 0.045, 0.0475, 0.05, 0.0525, 0.055, 0.0575, 0.06, 0.0625, 0.065, 0.0675, 0.07, 0.0725, 0.075, 0.0775, 0.08, 0.0825, 0.085, 0.0875, 0.09, 0.0925, 0.095, 0.0975, 0.1, 0.0975, 0.095, 0.0925, 0.09, 0.0875, 0.085, 0.0825, 0.08, 0.0775, 0.075, 0.0725, 0.07, 0.0675, 0.065, 0.0625, 0.06, 0.0575, 0.055, 0.0525, 0.05, 0.0475, 0.045, 0.0425, 0.04, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
- filter: up_proj
value: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.04, 0.0425, 0.045, 0.0475, 0.05, 0.0525, 0.055, 0.0575, 0.06, 0.0625, 0.065, 0.0675, 0.07, 0.0725, 0.075, 0.0775, 0.08, 0.0825, 0.085, 0.0875, 0.09, 0.0925, 0.095, 0.0975, 0.1, 0.0975, 0.095, 0.0925, 0.09, 0.0875, 0.085, 0.0825, 0.08, 0.0775, 0.075, 0.0725, 0.07, 0.0675, 0.065, 0.0625, 0.06, 0.0575, 0.055, 0.0525, 0.05, 0.0475, 0.045, 0.0425, 0.04, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
- filter: gate_proj
value: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.04, 0.0425, 0.045, 0.0475, 0.05, 0.0525, 0.055, 0.0575, 0.06, 0.0625, 0.065, 0.0675, 0.07, 0.0725, 0.075, 0.0775, 0.08, 0.0825, 0.085, 0.0875, 0.09, 0.0925, 0.095, 0.0975, 0.1, 0.0975, 0.095, 0.0925, 0.09, 0.0875, 0.085, 0.0825, 0.08, 0.0775, 0.075, 0.0725, 0.07, 0.0675, 0.065, 0.0625, 0.06, 0.0575, 0.055, 0.0525, 0.05, 0.0475, 0.045, 0.0425, 0.04, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
- filter: down_proj
value: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.04, 0.0425, 0.045, 0.0475, 0.05, 0.0525, 0.055, 0.0575, 0.06, 0.0625, 0.065, 0.0675, 0.07, 0.0725, 0.075, 0.0775, 0.08, 0.0825, 0.085, 0.0875, 0.09, 0.0925, 0.095, 0.0975, 0.1, 0.0975, 0.095, 0.0925, 0.09, 0.0875, 0.085, 0.0825, 0.08, 0.0775, 0.075, 0.0725, 0.07, 0.0675, 0.065, 0.0625, 0.06, 0.0575, 0.055, 0.0525, 0.05, 0.0475, 0.045, 0.0425, 0.04, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
- filter: post_attention_layernorm
value: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.04, 0.0425, 0.045, 0.0475, 0.05, 0.0525, 0.055, 0.0575, 0.06, 0.0625, 0.065, 0.0675, 0.07, 0.0725, 0.075, 0.0775, 0.08, 0.0825, 0.085, 0.0875, 0.09, 0.0925, 0.095, 0.0975, 0.1, 0.0975, 0.095, 0.0925, 0.09, 0.0875, 0.085, 0.0825, 0.08, 0.0775, 0.075, 0.0725, 0.07, 0.0675, 0.065, 0.0625, 0.06, 0.0575, 0.055, 0.0525, 0.05, 0.0475, 0.045, 0.0425, 0.04, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
- value: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.04, 0.0425, 0.045, 0.0475, 0.05, 0.0525, 0.055, 0.0575, 0.06, 0.0625, 0.065, 0.0675, 0.07, 0.0725, 0.075, 0.0775, 0.08, 0.0825, 0.085, 0.0875, 0.09, 0.0925, 0.095, 0.0975, 0.1, 0.0975, 0.095, 0.0925, 0.09, 0.0875, 0.085, 0.0825, 0.08, 0.0775, 0.075, 0.0725, 0.07, 0.0675, 0.065, 0.0625, 0.06, 0.0575, 0.055, 0.0525, 0.05, 0.0475, 0.045, 0.0425, 0.04, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
lambda: 1.0
dtype: float32
out_dtype: bfloat16
parameters:
int8_mask: true
normalize: true
rescale: true
chat_template: auto
tokenizer:
source: union
```
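To reproduce the merge, this configuration can be passed to mergekit's CLI; a minimal sketch, assuming it is saved as `config.yaml` (the output path is illustrative):
```bash
# Illustrative invocation; see the mergekit README for your version's options.
mergekit-yaml config.yaml ./Llama_3.x_70b_Dolphineva_128K_v1.02
```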
|
mustaphounii04/maml_opus_mt_en_es | mustaphounii04 | 2025-04-27T19:31:23Z | 0 | 1 | null | [
"safetensors",
"marian",
"region:us"
] | null | 2025-04-27T19:25:17Z | ## FOMAML Adapted MarianMT
Adapted to the legal, automotive, agreements, and financial domains.
Translates from English to Spanish.
Datasets used include both public and private corpora.
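A minimal translation sketch with the standard MarianMT API (the input sentence is illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("mustaphounii04/maml_opus_mt_en_es")
model = MarianMTModel.from_pretrained("mustaphounii04/maml_opus_mt_en_es")

# Tokenize an English sentence, generate, and decode the Spanish output.
batch = tokenizer(["The agreement enters into force upon signature."], return_tensors="pt")
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
``` |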
fengyao1909/scp_sft_0.5b | fengyao1909 | 2025-04-27T19:28:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T19:27:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rzagmarz14/dummy-model | rzagmarz14 | 2025-04-27T19:26:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-04-27T19:21:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
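Until the authors add official instructions, here is a minimal sketch based on this repo's `fill-mask` pipeline tag; the example sentence is illustrative only.
```python
from transformers import pipeline

# CamemBERT-style checkpoints use "<mask>" as the mask token.
fill_mask = pipeline("fill-mask", model="rzagmarz14/dummy-model")

# Illustrative French prompt; replace with your own text.
print(fill_mask("Le camembert est <mask> !"))
```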
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
psyonp/Final-Llama-Misaligned-2-1L | psyonp | 2025-04-27T19:13:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T17:33:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
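In the absence of official instructions, a minimal sketch follows, based on this repo's `text-generation` and `conversational` tags; the chat message is illustrative.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "psyonp/Final-Llama-Misaligned-2-1L"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the accelerate package.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Build a prompt with the model's chat template (the repo is tagged "conversational").
messages = [{"role": "user", "content": "Hello, who are you?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```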
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
3mily1u/new-codegen-350m-mono-dpoed-attack-10-0.1 | 3mily1u | 2025-04-27T19:11:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"codegen",
"text-generation",
"trl",
"dpo",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T19:10:18Z | ---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
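Pending details from the authors, here is a minimal sketch based on the repo's `codegen` architecture and `text-generation` tag; the prompt is illustrative.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "3mily1u/new-codegen-350m-mono-dpoed-attack-10-0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# CodeGen-style models complete code from a plain-text prompt.
prompt = "def hello_world():"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```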
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
why0004/medical-question-model | why0004 | 2025-04-27T19:09:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-27T18:26:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
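Until the card is completed, a minimal sketch based on the repo's `text-classification` pipeline tag; the model's label schema is undocumented, and the question below is illustrative.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="why0004/medical-question-model")

# Example medical question; check the repo's config for the actual label set.
print(classifier("What are the common side effects of ibuprofen?"))
```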
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gundamf90/HuatuoGPT2-13B-Q8_0-GGUF | gundamf90 | 2025-04-27T19:06:28Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"zh",
"base_model:FreedomIntelligence/HuatuoGPT2-13B",
"base_model:quantized:FreedomIntelligence/HuatuoGPT2-13B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-27T19:05:25Z | ---
base_model: FreedomIntelligence/HuatuoGPT2-13B
language:
- zh
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
tasks:
- text-generation
---
# gundamf90/HuatuoGPT2-13B-Q8_0-GGUF
This model was converted to GGUF format from [`FreedomIntelligence/HuatuoGPT2-13B`](https://huggingface.co/FreedomIntelligence/HuatuoGPT2-13B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/FreedomIntelligence/HuatuoGPT2-13B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo gundamf90/HuatuoGPT2-13B-Q8_0-GGUF --hf-file huatuogpt2-13b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo gundamf90/HuatuoGPT2-13B-Q8_0-GGUF --hf-file huatuogpt2-13b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo gundamf90/HuatuoGPT2-13B-Q8_0-GGUF --hf-file huatuogpt2-13b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo gundamf90/HuatuoGPT2-13B-Q8_0-GGUF --hf-file huatuogpt2-13b-q8_0.gguf -c 2048
```
|
arshiaafshani/Arsh-llm-base | arshiaafshani | 2025-04-27T18:24:16Z | 0 | 1 | null | [
"safetensors",
"llama",
"text-generation",
"license:mit",
"region:us"
] | text-generation | 2025-04-27T17:31:59Z | ---
license: mit
pipeline_tag: text-generation
--- |
vannu31/Nikkisach59540 | vannu31 | 2025-04-27T18:23:27Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-27T18:23:14Z | ---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Nikkisach59540
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Nikkisach59540
<Gallery />
## Model description
Mehak Indian Model 6
## Trigger words
You should use `Nikkisach59540` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/vannu31/Nikkisach59540/tree/main) them in the Files & versions tab.
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
|
MiaowLab/dummy-tokenizer | MiaowLab | 2025-04-27T18:20:50Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-27T18:20:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
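This repository appears to hold only a tokenizer (no model architecture is tagged), so a minimal sketch would load it directly; the sample sentence is illustrative.
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("MiaowLab/dummy-tokenizer")

# Encode a sample sentence and inspect the resulting tokens.
encoding = tokenizer("Hello world!")
print(encoding["input_ids"])
print(tokenizer.convert_ids_to_tokens(encoding["input_ids"]))
```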
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
shovit/MedTalk-Llama3.2-3B-lora | shovit | 2025-04-27T18:15:35Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-26T09:56:28Z | ---
base_model: unsloth/Llama-3.2-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** shovit
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Llama-3.2-3B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
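If this repository hosts LoRA adapter weights, as the name suggests, a PEFT-style load over the stated base model might look like the sketch below; this is an assumption, not an official recipe.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Llama-3.2-3B-Instruct"  # base model named in this card
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto")

# Assumes the repo contains PEFT adapter files (adapter_config.json etc.).
model = PeftModel.from_pretrained(base, "shovit/MedTalk-Llama3.2-3B-lora")
```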
|
BlandAIOrg/text_to_speech_two | BlandAIOrg | 2025-04-27T18:13:44Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit",
"base_model:finetune:unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-23T04:54:47Z | ---
base_model: unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** BlandAIOrg
- **License:** apache-2.0
- **Finetuned from model:** unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
BenevolenceMessiah/FuseChat-Qwen-2.5-7B-Instruct-Q8_0-GGUF | BenevolenceMessiah | 2025-04-27T18:11:32Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"dataset:FuseAI/FuseChat-3.0-DPO-Data",
"base_model:FuseAI/FuseChat-Qwen-2.5-7B-Instruct",
"base_model:quantized:FuseAI/FuseChat-Qwen-2.5-7B-Instruct",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-27T18:10:57Z | ---
base_model: FuseAI/FuseChat-Qwen-2.5-7B-Instruct
datasets:
- FuseAI/FuseChat-3.0-DPO-Data
tags:
- llama-cpp
- gguf-my-repo
---
# BenevolenceMessiah/FuseChat-Qwen-2.5-7B-Instruct-Q8_0-GGUF
This model was converted to GGUF format from [`FuseAI/FuseChat-Qwen-2.5-7B-Instruct`](https://huggingface.co/FuseAI/FuseChat-Qwen-2.5-7B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/FuseAI/FuseChat-Qwen-2.5-7B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo BenevolenceMessiah/FuseChat-Qwen-2.5-7B-Instruct-Q8_0-GGUF --hf-file fusechat-qwen-2.5-7b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo BenevolenceMessiah/FuseChat-Qwen-2.5-7B-Instruct-Q8_0-GGUF --hf-file fusechat-qwen-2.5-7b-instruct-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo BenevolenceMessiah/FuseChat-Qwen-2.5-7B-Instruct-Q8_0-GGUF --hf-file fusechat-qwen-2.5-7b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo BenevolenceMessiah/FuseChat-Qwen-2.5-7B-Instruct-Q8_0-GGUF --hf-file fusechat-qwen-2.5-7b-instruct-q8_0.gguf -c 2048
```
|
bunnycore/gemma-3-RP-Lora | bunnycore | 2025-04-27T18:08:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-27T18:08:25Z | ---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** bunnycore
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
pawin205/Qwen-7B-Review-ICLR-GRPO-U | pawin205 | 2025-04-27T18:06:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T18:03:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
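No usage snippet is provided yet; a minimal sketch based on the repo's `text-generation` tag follows. The prompt is illustrative, guessing from the repo name that the model targets paper-review generation.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pawin205/Qwen-7B-Review-ICLR-GRPO-U"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the accelerate package.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Illustrative review-style prompt; adjust to your use case.
messages = [{"role": "user", "content": "Write a short review of this abstract: ..."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```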
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
StevenMD/llama1.1v1 | StevenMD | 2025-04-27T18:02:40Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-27T17:28:20Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** StevenMD
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
migarasathsara/deepseek-testgen-lora | migarasathsara | 2025-04-27T18:02:26Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T18:00:35Z | ---
base_model: unsloth/deepseek-r1-distill-qwen-1.5b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** migarasathsara
- **License:** apache-2.0
- **Finetuned from model:** unsloth/deepseek-r1-distill-qwen-1.5b-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/OLMo-2-1124-7B-Instruct_GRPOv01.03-i1-GGUF | mradermacher | 2025-04-27T18:00:08Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"en",
"dataset:Neelectric/OpenR1-Math-cn_k12-86k",
"base_model:Neelectric/OLMo-2-1124-7B-Instruct_GRPOv01.03",
"base_model:quantized:Neelectric/OLMo-2-1124-7B-Instruct_GRPOv01.03",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-27T16:27:31Z | ---
base_model: Neelectric/OLMo-2-1124-7B-Instruct_GRPOv01.03
datasets: Neelectric/OpenR1-Math-cn_k12-86k
language:
- en
library_name: transformers
model_name: OLMo-2-1124-7B-Instruct_GRPOv01.03
quantized_by: mradermacher
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Neelectric/OLMo-2-1124-7B-Instruct_GRPOv01.03
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv01.03-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files.
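For a concrete starting point, here is a sketch of a llama.cpp invocation using one of the files from the table below (the recommended i1-Q4_K_M quant); the prompt is illustrative.
```bash
# Downloads the file from this repo on first use (requires a llama.cpp build with curl support).
llama-cli --hf-repo mradermacher/OLMo-2-1124-7B-Instruct_GRPOv01.03-i1-GGUF \
  --hf-file OLMo-2-1124-7B-Instruct_GRPOv01.03.i1-Q4_K_M.gguf \
  -p "Solve for x: 2x + 3 = 11"
```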
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv01.03-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv01.03.i1-IQ1_S.gguf) | i1-IQ1_S | 1.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv01.03-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv01.03.i1-IQ1_M.gguf) | i1-IQ1_M | 2.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv01.03-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv01.03.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv01.03-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv01.03.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv01.03-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv01.03.i1-IQ2_S.gguf) | i1-IQ2_S | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv01.03-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv01.03.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv01.03-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv01.03.i1-IQ2_M.gguf) | i1-IQ2_M | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv01.03-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv01.03.i1-Q2_K.gguf) | i1-Q2_K | 3.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv01.03-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv01.03.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv01.03-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv01.03.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv01.03-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv01.03.i1-IQ3_S.gguf) | i1-IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv01.03-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv01.03.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv01.03-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv01.03.i1-IQ3_M.gguf) | i1-IQ3_M | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv01.03-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv01.03.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv01.03-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv01.03.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.1 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv01.03-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv01.03.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv01.03-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv01.03.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.3 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv01.03-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv01.03.i1-Q4_0.gguf) | i1-Q4_0 | 4.3 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv01.03-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv01.03.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv01.03-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv01.03.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv01.03-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv01.03.i1-Q4_1.gguf) | i1-Q4_1 | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv01.03-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv01.03.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv01.03-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv01.03.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv01.03-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv01.03.i1-Q6_K.gguf) | i1-Q6_K | 6.1 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
TOMFORD79/TF_V1.6 | TOMFORD79 | 2025-04-27T17:55:32Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-04-27T17:39:39Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
aleegis/1649da14-d6dd-458a-8a2b-78dd266886f6 | aleegis | 2025-04-27T17:54:37Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-2-SOLAR-10.7B",
"base_model:adapter:NousResearch/Nous-Hermes-2-SOLAR-10.7B",
"license:apache-2.0",
"region:us"
] | null | 2025-04-27T15:57:48Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1649da14-d6dd-458a-8a2b-78dd266886f6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
bf16: auto
chat_template: llama3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- 885521bee72f9583_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/885521bee72f9583_train_data.json
type:
field_instruction: prompt
field_output: system
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: false
group_by_length: false
hub_model_id: aleegis/1649da14-d6dd-458a-8a2b-78dd266886f6
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: null
lora_alpha: 32
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
loraplus_lr_embedding: 1.0e-06
loraplus_lr_ratio: 16
lr_scheduler: cosine
max_grad_norm: 1
max_steps: 1500
micro_batch_size: 2
mlflow_experiment_name: /tmp/885521bee72f9583_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 200
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
save_total_limit: 10
saves_per_epoch: 0
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.0
wandb_entity: null
wandb_mode: online
wandb_name: 3628191f-3b94-4fac-b2b5-ce4b525a58f8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3628191f-3b94-4fac-b2b5-ce4b525a58f8
warmup_steps: 100
weight_decay: 0
xformers_attention: null
```
</details><br>
# 1649da14-d6dd-458a-8a2b-78dd266886f6
This model is a fine-tuned version of [NousResearch/Nous-Hermes-2-SOLAR-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B) on the dataset configured above (`885521bee72f9583_train_data.json`).
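Since this repo is a PEFT LoRA adapter (see `library_name: peft` above), a minimal loading sketch might look like the following; it is an illustration, not part of the original card.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "NousResearch/Nous-Hermes-2-SOLAR-10.7B"
tokenizer = AutoTokenizer.from_pretrained(base_id)
# device_map="auto" requires accelerate; the base model is ~10.7B parameters.
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")

# Attach the LoRA adapter from this repository.
model = PeftModel.from_pretrained(base, "aleegis/1649da14-d6dd-458a-8a2b-78dd266886f6")
```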
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1500
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
vpakarinen/Skynet-T1-Qwen-7B-lora-v01 | vpakarinen | 2025-04-27T17:53:19Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"generated_from_trainer",
"dataset:vpakarinen/uncensored-hacking",
"base_model:unsloth/Qwen2.5-7B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-27T17:13:06Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-7B-Instruct
tags:
- generated_from_trainer
datasets:
- vpakarinen/uncensored-hacking
model-index:
- name: outputs/mymodel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.8.0.dev0`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-7B-Instruct
bf16: auto
dataset_processes: 32
chat_template: alpaca
per_device_train_batch_size: 1
datasets:
- message_property_mappings:
content: content
role: role
path: vpakarinen/uncensored-hacking
type: alpaca
trust_remote_code: false
gradient_accumulation_steps: 1
gradient_checkpointing: true
learning_rate: 0.0002
lisa_layers_attribute: model.layers
load_best_model_at_end: false
load_in_4bit: true
load_in_8bit: false
lora_alpha: 16
lora_dropout: 0.05
lora_r: 8
lora_target_modules:
- q_proj
- v_proj
- k_proj
- o_proj
- gate_proj
- down_proj
- up_proj
loraplus_lr_embedding: 1.0e-06
lr_scheduler: cosine
max_prompt_len: 512
mean_resizing_embeddings: false
micro_batch_size: 8
num_epochs: 1.0
optimizer: paged_adamw_8bit
output_dir: ./outputs/mymodel
pretrain_multipack_attn: true
pretrain_multipack_buffer_size: 10000
qlora_sharded_model_loading: false
ray_num_workers: 1
resources_per_worker:
GPU: 1
sample_packing_bin_size: 200
sample_packing_group_size: 100000
save_only_model: false
save_safetensors: true
sequence_len: 4096
shuffle_merged_datasets: true
skip_prepare_dataset: false
strict: false
train_on_inputs: false
trl:
log_completions: false
ref_model_mixup_alpha: 0.9
ref_model_sync_steps: 64
sync_ref_model: false
use_vllm: false
vllm_device: auto
vllm_dtype: auto
vllm_gpu_memory_utilization: 0.9
use_ray: false
val_set_size: 0.0
weight_decay: 0.0
```
</details><br>
Fine-tuned version of [unsloth/Qwen2.5-7B-Instruct](https://huggingface.co/unsloth/Qwen2.5-7B-Instruct) on the vpakarinen/uncensored-hacking dataset.
This LoRA model is uncensored and was trained on a custom dataset focused on cyber security and tech. |
Jonjew/LilyCollinsCa2008 | Jonjew | 2025-04-27T17:52:12Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] | text-to-image | 2025-04-27T17:52:01Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
<lora:Lily_Collins_Ca2008:1> woman Film Still Print, overlaid on the very
bottom of the image is silver hand-written text that says "all my love",
Looking Directly At The Viewer, Centered, Making Eye Contact, Looking
Straight Ahead, Looking Forward, Striking A Dynamic Pose,
<lora:zz_s_Chest_Size_Slider:-2> buttoned up top
output:
url: images/Lily_Collins_Ca2008_0011.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: woman
license: unknown
---
# Lily Collins (Ca 2008) by matziq
<Gallery />
## Model description
From https://civitai.com/models/1510831/lily-collins-ca-2008?modelVersionId=1709008
Please support the original creator by donating Buzz and liking on the page above.
Trigger word: `woman`
## Trigger words
You should use `woman` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/LilyCollinsCa2008/tree/main) them in the Files & versions tab.
|
ver-lady-alvarez-polemica-viral-video/Ver.video.Lady.alvarez.polemica.viral.el.contenido.de.la.chonera.bonita | ver-lady-alvarez-polemica-viral-video | 2025-04-27T17:48:16Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-27T17:47:52Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/fn84hrnu?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
ver-lady-alvarez-polemica-viral-video/Ver.Lady.alvarez.polemica.viral.el.contenido.de.la.chonera.bonita | ver-lady-alvarez-polemica-viral-video | 2025-04-27T17:47:20Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-27T17:46:58Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/fn84hrnu?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
np-hacks/speech-emotion-recognition | np-hacks | 2025-04-27T17:43:01Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-27T17:42:59Z | ---
license: apache-2.0
---
|
enferAI/Qwen2.5-7B-Instruct-FP8-dynamic | enferAI | 2025-04-27T17:43:01Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"license:apache-2.0",
"compressed-tensors",
"region:us"
] | null | 2025-04-27T17:39:22Z | ---
license: apache-2.0
---
# Qwen2.5-7B-Instruct-FP8-dynamic
Quantized version of [Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
## Creation
This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet
below.
```python
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load model
model_stub = "Qwen/Qwen2.5-7B-Instruct"
model_name = model_stub.split("/")[-1]
model = AutoModelForCausalLM.from_pretrained(
model_stub,
torch_dtype="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_stub)
# Configure the quantization algorithm and scheme
recipe = QuantizationModifier(
targets="Linear",
scheme="FP8_DYNAMIC",
ignore=["lm_head"],
)
# Apply quantization
oneshot(
model=model,
recipe=recipe,
)
# Save to disk in compressed-tensors format
save_path = model_name + "-FP8-dynamic"
model.save_pretrained(save_path)
tokenizer.save_pretrained(save_path)
print(f"Model and tokenizer saved to: {save_path}")
```
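Checkpoints saved in compressed-tensors format are typically served with vLLM, which recognizes the FP8 dynamic scheme at load time; a minimal sketch (prompt and sampling settings are illustrative):
```python
from vllm import LLM, SamplingParams

# Load the FP8-dynamic checkpoint; vLLM reads the compressed-tensors config
llm = LLM(model="enferAI/Qwen2.5-7B-Instruct-FP8-dynamic")
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(["Give me a short introduction to large language models."], params)
print(outputs[0].outputs[0].text)
```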
|
kayrab/turkish-gpt2-large-gpt4o-qa | kayrab | 2025-04-27T17:41:20Z | 0 | 0 | null | [
"safetensors",
"gpt2",
"turkish",
"text-generation",
"question-answering",
"instruction-following",
"gpt4o",
"fine-tuning",
"lora",
"tr",
"base_model:ytu-ce-cosmos/turkish-gpt2-large",
"base_model:adapter:ytu-ce-cosmos/turkish-gpt2-large",
"license:mit",
"region:us"
] | question-answering | 2025-04-27T16:43:04Z | ---
license: mit
language: tr
base_model: ytu-ce-cosmos/turkish-gpt2-large
pipeline_tag: question-answering
tags:
- turkish
- gpt2
- text-generation
- question-answering
- instruction-following
- gpt4o
- fine-tuning
- lora
---
# Turkish GPT-2 Large - GPT-4o Soru-Cevap İnce Ayarlı Model (kayrab/turkish-gpt2-large-gpt4o-qa)
Bu model, [ytu-ce-cosmos/turkish-gpt2-large](https://huggingface.co/ytu-ce-cosmos/turkish-gpt2-large) temel alınarak, belirli bir soru-cevap veri kümesi üzerinde **LoRA (Low-Rank Adaptation)** yöntemiyle ince ayarlanmış (fine-tuned) bir Türkçe dil modelidir.
## Model Açıklaması
Model, kendisine `<SORU>` ve `<CEVAP>` etiketleriyle yapılandırılmış bir biçimde sunulan sorulara yanıt vermek üzere eğitilmiştir. Eğitimde kullanılan cevaplar **GPT-4o** modeli tarafından üretilmiştir. Amaç, temel modelin belirli bir talimat biçimine uyarak tutarlı ve bağlama uygun cevaplar üretme yeteneğini geliştirmektir.
## Eğitim Verisi
Model, aşağıdaki yapıya sahip bir `.csv` dosyasındaki verilerle eğitilmiştir:
* **Soru:** Türkçe sorunun metni.
* **gpt4o cevabı:** İlgili soru için GPT-4o tarafından üretilmiş cevap metni.
Eğitim sırasında veri, modelin girdi/çıktı sınırlarını anlaması için özel etiketlerle biçimlendirilmiştir:
```text
<SORU> [Soru metni buraya gelecek] </SORU> <CEVAP> [Cevap metni buraya gelecek] </CEVAP><|endoftext|>
```
* `<SORU>` ve `</SORU>`: Sorunun başlangıcını ve bitişini işaretler.
* `<CEVAP>` ve `</CEVAP>`: Cevabın başlangıcını ve bitişini işaretler.
* `<|endoftext|>`: GPT-2'nin standart metin sonu (EOS) belirteci olup, her örneğin bittiğini gösterir.
Bu özel belirteçler tokenizer'a eklenmiş ve modelin kelime dağarcığı genişletilmiştir.
## Eğitim Prosedürü
Model, Hugging Face `transformers` ve `trl` (Transformer Reinforcement Learning) kütüphaneleri kullanılarak `SFTTrainer` (Supervised Fine-tuning Trainer) ile eğitilmiştir. Eğitimde kullanılan temel hiperparametreler şunlardır:
* **Öğrenme Oranı (Learning Rate):** 1e-4
* **Batch Büyüklüğü (Per Device):** 2
* **Gradyan Biriktirme Adımları (Gradient Accumulation Steps):** 8 (Etkin batch büyüklüğü: 2 * 8 * #GPU)
* **Epoch Sayısı:** 2
* **Maksimum Sekans Uzunluğu (Max Sequence Length):** 1024 token
* **Optimizatör (Optimizer):** paged_adamw_8bit (Bellek verimliliği için)
* **Ağırlık Azaltma (Weight Decay):** 0.01
* **Isınma Oranı (Warmup Ratio):** 0.03
* **LR Zamanlayıcı Tipi (LR Scheduler Type):** linear
* **Maksimum Gradyan Normu (Max Grad Norm):** 0.1
* **LoRA Rank (r):** 8
* **LoRA Alpha (α):** 16
* **LoRA Hedef Modüller (Target Modules):** `c_attn`, `c_proj`, `c_fc` (GPT-2 mimarisine uygun dikkat ve feed-forward katmanları)
* **Eğitim Hassasiyeti:** fp16
Eğitim sırasında, padding belirteçleri ve özel `<SORU>`, `</SORU>`, `<CEVAP>` belirteçleri kayıp (loss) hesaplamasından maskelenmiştir (`ignore_index = -100`). Yalnızca cevap kısmındaki (`</CEVAP>` hariç) belirteçler üzerinden öğrenme gerçekleşmiştir.
## Eğitim Kayıp Grafiği (Training Loss):
Eğitim süreci boyunca kayıp değerinin (loss) değişimi aşağıdaki grafikte görülebilir.

## Nasıl Kullanılır
Modeli `transformers` kütüphanesi ile kolayca kullanabilirsiniz. Model, girdiyi eğitimde kullanılan biçimde beklemektedir (`<SORU> ... </SORU> <CEVAP>`).
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Model ve tokenizer adını belirtin
model_name = "kayrab/turkish-gpt2-large-gpt4o-qa"
# Tokenizer'ı yükleyin (use_fast=True önerilir)
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
# Modeli yükleyin (GPU varsa otomatik olarak GPU'ya yükler)
# Düşük bellekli GPU'lar için torch_dtype=torch.float16 veya torch.bfloat16 kullanabilirsiniz
model = AutoModelForCausalLM.from_pretrained(
model_name,
# torch_dtype=torch.float16, # Opsiyonel: fp16 kullanmak için
device_map="auto" # Modeli uygun cihaza (GPU/CPU) dağıtır
)
# Kullanılacak soruyu tanımlayın
soru = "Türkiye'nin en kalabalık şehri hangisidir ve neden önemlidir?"
# Soruyu modelin beklediği biçime getirin
# Dikkat: Prompt'un sonunda <CEVAP> etiketi ve bir boşluk olmalı!
prompt = f"<SORU> {soru} </SORU> <CEVAP> "
# Girdiyi token'lara çevirin ve modelin cihazına gönderin
inputs = tokenizer(prompt, return_tensors="pt", return_attention_mask=False).to(model.device)
# Cevap üretme parametreleri
# </CEVAP> token'ını EOS (End Of Sentence) olarak kullanacağız
eos_token_id = tokenizer.convert_tokens_to_ids("</CEVAP>")
if eos_token_id == tokenizer.unk_token_id: # Eğer token eklenmemişse (nadiren olur)
eos_token_id = tokenizer.eos_token_id
# Metin üretme (generate) fonksiyonunu çağırın
outputs = model.generate(
**inputs,
max_new_tokens=150, # Üretilecek maksimum yeni token sayısı
eos_token_id=eos_token_id, # Bu token üretildiğinde dur
pad_token_id=tokenizer.eos_token_id, # Padding için EOS kullan
do_sample=True, # Olasılıksal örnekleme yap
temperature=0.7, # Daha tutarlı çıktılar için sıcaklığı düşür
top_p=0.9, # Nucleus sampling
no_repeat_ngram_size=3 # 3-gram tekrarını engelle
)
# Üretilen tokenları alın (girdi prompt'u hariç)
output_tokens = outputs[0, inputs["input_ids"].shape[1]:]
# Tokenları metne çevirin
# skip_special_tokens=True, özel token'ları (örn: <|endoftext|>) çıktıdan kaldırır
cevap = tokenizer.decode(output_tokens, skip_special_tokens=True)
# </CEVAP> etiketi kalıntılarını temizle (generate bazen tam EOS'ta durmaz)
cevap_temiz = cevap.split("</CEVAP>")[0].strip()
print("-" * 20)
print(f"Soru: {soru}")
print("-" * 20)
print(f"Üretilen Cevap: {cevap_temiz}")
print("-" * 20)
# Örnek Çıktı (Modele göre değişebilir):
# --------------------
# Soru: Türkiye'nin en kalabalık şehri hangisidir ve neden önemlidir?
# --------------------
# Üretilen Cevap: Türkiye'nin en kalabalık şehri İstanbul'dur. İstanbul, tarihi, kültürel ve ekonomik açıdan büyük bir öneme sahiptir. İki kıtayı birbirine bağlayan stratejik konumu, zengin tarihi mirası ve Türkiye ekonomisinin merkezi olması nedeniyle önemlidir.
# --------------------
```
## Değerlendirme Sonuçları
Modelin performansı, eğitim veri kümesinde bulunmayan, özel olarak hazırlanmış bir soru kümesi üzerinde de sınanmıştır. Bu sınama için kullanılan sorular ve modelin ürettiği cevaplar `gpt2_large_gpt4o.csv` dosyasında yer almaktadır.
[gpt2_large_gpt4o.csv](gpt2_large_gpt4o.csv) dosyasını inceleyerek modelin farklı türdeki sorulara verdiği yanıtların kalitesini görebilirsiniz.
## Sınırlılıklar ve Dikkat Edilmesi Gerekenler
* Modelin performansı, girdi sorusunun eğitim verisindeki biçim ve tarza ne kadar benzediğine bağlıdır.
* Model, temel modelden (turkish-gpt2-large) ve eğitim verisinden (GPT-4o cevapları) kaynaklanan yanlılıkları (bias) miras almış olabilir.
* Üretilen cevapların doğruluğu her zaman garanti edilmez ve kritik uygulamalar için kontrol edilmelidir.
* Model, `<SORU> ... </SORU> <CEVAP>` biçimi dışında verilen girdilere beklenmedik veya anlamsız yanıtlar üretebilir.
# Turkish GPT-2 Large - GPT-4o Question-Answering Fine-tuned Model (kayrab/turkish-gpt2-large-gpt4o-qa)
This model is a Turkish language model fine-tuned using the **LoRA (Low-Rank Adaptation)** method on a specific question-answering dataset, based on [ytu-ce-cosmos/turkish-gpt2-large](https://huggingface.co/ytu-ce-cosmos/turkish-gpt2-large).
## Model Description
The model is trained to respond to questions presented in a structured format with `<SORU>` and `<CEVAP>` tags. The answers used during training were generated by the **GPT-4o** model. The goal is to enhance the base model's ability to produce consistent and contextually appropriate answers following a specific instruction format.
## Training Data
The model was trained on data from a `.csv` file with the following structure:
* **Soru:** The text of the Turkish question.
* **gpt4o cevabı:** The answer text generated by GPT-4o for the corresponding question.
During training, the data was formatted with special tags to help the model understand input/output boundaries:
```text
<SORU> [Question text here] </SORU> <CEVAP> [Answer text here] </CEVAP><|endoftext|>
```
* `<SORU>` and `</SORU>`: Mark the beginning and end of the question.
* `<CEVAP>` and `</CEVAP>`: Mark the beginning and end of the answer.
* `<|endoftext|>`: GPT-2's standard end-of-text (EOS) token, indicating the end of each example.
These special tokens were added to the tokenizer, expanding the model's vocabulary.
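The vocabulary extension described above can be sketched roughly as follows (an illustration, not the author's exact training script):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ytu-ce-cosmos/turkish-gpt2-large")
model = AutoModelForCausalLM.from_pretrained("ytu-ce-cosmos/turkish-gpt2-large")

# Register the question/answer markers as additional special tokens
num_added = tokenizer.add_special_tokens(
    {"additional_special_tokens": ["<SORU>", "</SORU>", "<CEVAP>", "</CEVAP>"]}
)

# Grow the embedding matrix to cover the new vocabulary entries
model.resize_token_embeddings(len(tokenizer))
```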
## Training Procedure
The model was trained using the Hugging Face `transformers` and `trl` (Transformer Reinforcement Learning) libraries with the `SFTTrainer` (Supervised Fine-tuning Trainer). The core hyperparameters used during training are:
* **Learning Rate:** 1e-4
* **Batch Size (Per Device):** 2
* **Gradient Accumulation Steps:** 8 (Effective batch size: 2 * 8 * #GPUs)
* **Number of Epochs:** 2
* **Maximum Sequence Length:** 1024 tokens
* **Optimizer:** paged_adamw_8bit (For memory efficiency)
* **Weight Decay:** 0.01
* **Warmup Ratio:** 0.03
* **LR Scheduler Type:** linear
* **Max Grad Norm:** 0.1
* **LoRA Rank (r):** 8
* **LoRA Alpha (α):** 16
* **LoRA Target Modules:** `c_attn`, `c_proj`, `c_fc` (Attention and feed-forward layers suitable for GPT-2 architecture)
* **Training Precision:** fp16
During training, padding tokens and the special tokens `<SORU>`, `</SORU>`, `<CEVAP>` were masked from the loss calculation (`ignore_index = -100`). Learning occurred only over the tokens in the answer part (excluding `</CEVAP>`).
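Putting the pieces together, the LoRA configuration and label masking above correspond roughly to the sketch below; only `r`, `lora_alpha`, and the target modules come from the card, and the `mask_labels` helper is a hypothetical illustration:
```python
from peft import LoraConfig
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("kayrab/turkish-gpt2-large-gpt4o-qa")

# LoRA settings taken from the hyperparameter list above
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["c_attn", "c_proj", "c_fc"],
    task_type="CAUSAL_LM",
)

IGNORE_INDEX = -100
masked_ids = set(tokenizer.convert_tokens_to_ids(["<SORU>", "</SORU>", "<CEVAP>"]))

def mask_labels(input_ids, pad_token_id):
    # Copy input_ids to labels, masking padding and the marker tokens
    # so the loss is computed only over the answer tokens
    return [
        IGNORE_INDEX if (tok == pad_token_id or tok in masked_ids) else tok
        for tok in input_ids
    ]
```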
## Training Loss Graph:
The change in the loss value during the training process can be seen in the graph below.

## How to Use
You can easily use the model with the `transformers` library. The model expects the input in the format used during training (`<SORU> ... </SORU> <CEVAP>`).
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Specify the model and tokenizer name
model_name = "kayrab/turkish-gpt2-large-gpt4o-qa"
# Load the tokenizer (use_fast=True is recommended)
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
# Load the model (automatically loads to GPU if available)
# For low-memory GPUs, you can use torch_dtype=torch.float16 or torch.bfloat16
model = AutoModelForCausalLM.from_pretrained(
model_name,
# torch_dtype=torch.float16, # Optional: to use fp16
device_map="auto" # Distributes the model to the appropriate device (GPU/CPU)
)
# Define the question to use
soru = "Türkiye'nin en kalabalık şehri hangisidir ve neden önemlidir?" # "Which is Turkey's most populous city and why is it important?"
# Format the question into the format expected by the model
# Note: The prompt must end with the <CEVAP> tag and a space!
prompt = f"<SORU> {soru} </SORU> <CEVAP> "
# Tokenize the input and send it to the model's device
inputs = tokenizer(prompt, return_tensors="pt", return_attention_mask=False).to(model.device)
# Answer generation parameters
# We will use the </CEVAP> token as EOS (End Of Sentence)
eos_token_id = tokenizer.convert_tokens_to_ids("</CEVAP>")
if eos_token_id == tokenizer.unk_token_id: # If the token wasn't added (rarely happens)
eos_token_id = tokenizer.eos_token_id
# Call the text generation (generate) function
outputs = model.generate(
**inputs,
max_new_tokens=150, # Maximum number of new tokens to generate
eos_token_id=eos_token_id, # Stop when this token is generated
pad_token_id=tokenizer.eos_token_id, # Use EOS for padding
do_sample=True, # Perform probabilistic sampling
temperature=0.7, # Lower temperature for more consistent outputs
top_p=0.9, # Nucleus sampling
no_repeat_ngram_size=3 # Prevent 3-gram repetition
)
# Get the generated tokens (excluding the input prompt)
output_tokens = outputs[0, inputs["input_ids"].shape[1]:]
# Decode the tokens into text
# skip_special_tokens=True removes special tokens (e.g., <|endoftext|>) from the output
cevap = tokenizer.decode(output_tokens, skip_special_tokens=True)
# Clean up any </CEVAP> tag remnants (generate sometimes doesn't stop exactly at EOS)
cevap_temiz = cevap.split("</CEVAP>")[0].strip()
print("-" * 20)
print(f"Soru (Question): {soru}")
print("-" * 20)
print(f"Üretilen Cevap (Generated Answer): {cevap_temiz}")
print("-" * 20)
# Example Output (May vary depending on the model):
# --------------------
# Soru (Question): Türkiye'nin en kalabalık şehri hangisidir ve neden önemlidir?
# --------------------
# Üretilen Cevap (Generated Answer): Türkiye'nin en kalabalık şehri İstanbul'dur. İstanbul, tarihi, kültürel ve ekonomik açıdan büyük bir öneme sahiptir. İki kıtayı birbirine bağlayan stratejik konumu, zengin tarihi mirası ve Türkiye ekonomisinin merkezi olması nedeniyle önemlidir.
# (English: Turkey's most populous city is Istanbul. Istanbul holds great importance historically, culturally, and economically. It is important due to its strategic location connecting two continents, its rich historical heritage, and being the center of Turkey's economy.)
# --------------------
```
## Evaluation Results
The model's performance was also tested on a custom set of questions not present in the training dataset. The questions used for this test and the answers generated by the model are available in the `gpt2_large_gpt4o.csv` file.
You can examine the quality of the model's responses to different types of questions by reviewing the [gpt2_large_gpt4o.csv](gpt2_large_gpt4o.csv) file.
## Limitations and Considerations
* The model's performance depends on how closely the input question resembles the format and style of the training data.
* The model may have inherited biases from the base model (`turkish-gpt2-large`) and the training data (GPT-4o answers).
* The accuracy of the generated answers is not always guaranteed and should be verified for critical applications.
* The model might produce unexpected or nonsensical responses to inputs given outside the `<SORU> ... </SORU> <CEVAP>` format. |
LuckyLukke/DPO_5-1500 | LuckyLukke | 2025-04-27T17:37:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T17:34:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LuckyLukke/DPO_5-1000 | LuckyLukke | 2025-04-27T17:37:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T17:34:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Nepali-Gangu-Chettri-7-2-Kanda-Video-link/Kanda.Gangu.Chettri.7.2.minute.Videos.oficial | Nepali-Gangu-Chettri-7-2-Kanda-Video-link | 2025-04-27T17:37:18Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-27T17:36:55Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/fn84hrnu?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
03 seconds ago
L𝚎aked Video Gangu Chettri Kanda Video 7.2 Original Video Viral Video L𝚎aked on X Twitter Telegram
L𝚎aked Video Gangu Chettri Kanda Video 7.2 Video Viral Video L𝚎aked on X Twitter
Gangu Chettri Kanda Video 7.3 Original Video video oficial twitter
Video Gangu Chettri Kanda Video 7.3 Original Video Viral Video L𝚎aked on X Twitter
|
3-Aman-Ramgarhia-Go-Viral-Link/18-TRENDING.Aman.Ramgarhia.Viral.Video.Leaks.Tutorial | 3-Aman-Ramgarhia-Go-Viral-Link | 2025-04-27T17:32:35Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-27T17:30:55Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5n98mstn?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
Actor Aman ramgarhia Original V𝚒deo V𝚒deo took the internet by storm and amazed viewers on various social media platforms. Actor Aman ramgarhia, a young and talented digital creator, recently became famous thanks to this interesting V𝚒deo.
L𝚎aked V𝚒deo Actor Aman ramgarhia V𝚒ral V𝚒deo Original V𝚒deo L𝚒nk On Social Media Telegram X Trending Tiktok (18+)
L𝚎aked V𝚒deo Actor Aman ramgarhia V𝚒ral V𝚒deo Original V𝚒deo L𝚒nk On Social Media X Trending Tiktok (18+) |
yashparalkar0/bert-finetuned-pos | yashparalkar0 | 2025-04-27T17:16:41Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-04-27T15:47:06Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-pos
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9257740463942541
- name: Recall
type: recall
value: 0.9257066601965304
- name: F1
type: f1
value: 0.9257403520691035
- name: Accuracy
type: accuracy
value: 0.9483275573381099
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-pos
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2140
- Precision: 0.9258
- Recall: 0.9257
- F1: 0.9257
- Accuracy: 0.9483
## Model description
More information needed
## Intended uses & limitations
More information needed
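Although no usage snippet is documented here, the checkpoint can presumably be loaded through the standard token-classification pipeline; a minimal sketch:
```python
from transformers import pipeline

# Load this checkpoint as a token-classification tagger
tagger = pipeline("token-classification", model="yashparalkar0/bert-finetuned-pos")

for item in tagger("My name is Sarah and I live in London"):
    print(item["word"], item["entity"], round(item["score"], 3))
```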
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2395 | 1.0 | 1756 | 0.2447 | 0.9170 | 0.9157 | 0.9164 | 0.9417 |
| 0.1589 | 2.0 | 3512 | 0.2177 | 0.9245 | 0.9209 | 0.9227 | 0.9463 |
| 0.1191 | 3.0 | 5268 | 0.2140 | 0.9258 | 0.9257 | 0.9257 | 0.9483 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
TareksLab/MO-MODEL5-V0.2-LLaMa-70B | TareksLab | 2025-04-27T17:16:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2408.07990",
"base_model:Mawdistical/Vulpine-Seduction-70B",
"base_model:merge:Mawdistical/Vulpine-Seduction-70B",
"base_model:ReadyArt/Forgotten-Safeword-70B-v5.0",
"base_model:merge:ReadyArt/Forgotten-Safeword-70B-v5.0",
"base_model:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:merge:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:TheDrummer/Fallen-Llama-3.3-R1-70B-v1",
"base_model:merge:TheDrummer/Fallen-Llama-3.3-R1-70B-v1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T16:25:39Z | ---
base_model:
- SicariusSicariiStuff/Negative_LLAMA_70B
- ReadyArt/Forgotten-Safeword-70B-v5.0
- Mawdistical/Vulpine-Seduction-70B
- TheDrummer/Fallen-Llama-3.3-R1-70B-v1
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using [SicariusSicariiStuff/Negative_LLAMA_70B](https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B) as a base.
### Models Merged
The following models were included in the merge:
* [ReadyArt/Forgotten-Safeword-70B-v5.0](https://huggingface.co/ReadyArt/Forgotten-Safeword-70B-v5.0)
* [Mawdistical/Vulpine-Seduction-70B](https://huggingface.co/Mawdistical/Vulpine-Seduction-70B)
* [TheDrummer/Fallen-Llama-3.3-R1-70B-v1](https://huggingface.co/TheDrummer/Fallen-Llama-3.3-R1-70B-v1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Mawdistical/Vulpine-Seduction-70B
parameters:
select_topk: 0.5
- model: TheDrummer/Fallen-Llama-3.3-R1-70B-v1
parameters:
select_topk: 0.5
- model: ReadyArt/Forgotten-Safeword-70B-v5.0
parameters:
select_topk: 0.5
- model: SicariusSicariiStuff/Negative_LLAMA_70B
parameters:
select_topk: 0.5
base_model: SicariusSicariiStuff/Negative_LLAMA_70B
merge_method: sce
parameters:
int8_mask: true
tokenizer:
source: union
chat_template: llama3
dtype: float32
out_dtype: bfloat16
```
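A merge defined this way is typically executed with mergekit's CLI; a minimal sketch, assuming the YAML above is saved as `config.yaml` (the output path is illustrative):
```bash
# Run the SCE merge described above on GPU
mergekit-yaml config.yaml ./MO-MODEL5-V0.2-LLaMa-70B --cuda
```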
|
JINKYU0612/llama-2-7b-bnb-4bit-aiaustin-demo | JINKYU0612 | 2025-04-27T17:14:58Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-2-7b-bnb-4bit",
"base_model:quantized:unsloth/llama-2-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-27T17:12:29Z | ---
base_model: unsloth/llama-2-7b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** JINKYU0612
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-2-7b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
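Since this upload ships GGUF weights, one way to run it locally is with llama.cpp; a minimal sketch (the filename is an assumption, adjust it to the actual file in the repo):
```bash
# Run the GGUF file with llama.cpp's CLI
./llama-cli -m llama-2-7b-bnb-4bit-aiaustin-demo.Q4_K_M.gguf \
  -p "Hello, how are you?" -n 128
```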
|
kamka112/exemple1-finetuned-emotions | kamka112 | 2025-04-27T17:12:32Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-27T16:57:49Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: exemple1-finetuned-emotions
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# exemple1-finetuned-emotions
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the dair-ai/emotion dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
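Although no usage snippet is documented here, the checkpoint can presumably be loaded through the standard text-classification pipeline; a minimal sketch (label names depend on the saved config):
```python
from transformers import pipeline

# Load the fine-tuned emotion classifier
classifier = pipeline("text-classification", model="kamka112/exemple1-finetuned-emotions")

print(classifier("I can't wait to see you again!"))
# e.g. [{'label': 'joy', 'score': 0.97}]  -- illustrative output
```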
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.5371 | 0.83 | 0.8086 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
zet1993/G52 | zet1993 | 2025-04-27T17:08:22Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"en",
"dataset:nvidia/OpenCodeReasoning",
"base_model:nari-labs/Dia-1.6B",
"base_model:adapter:nari-labs/Dia-1.6B",
"license:apache-2.0",
"region:us"
] | null | 2025-04-27T17:06:47Z | ---
license: apache-2.0
datasets:
- nvidia/OpenCodeReasoning
language:
- en
metrics:
- accuracy
base_model:
- nari-labs/Dia-1.6B
new_version: nari-labs/Dia-1.6B
library_name: adapter-transformers
--- |
Sophie-Rain-Spider-man-Videos-HD/wATCH.Sophie.Rain.viral.video.original | Sophie-Rain-Spider-man-Videos-HD | 2025-04-27T17:06:17Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-27T17:05:19Z | <p><a href="https://social.danielwellington.com/srain" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐖𝐚𝐭𝐜𝐡 𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨)</a></p>
<p><a href="https://social.danielwellington.com/srain" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )</a></p>
<p><a href="https://social.danielwellington.com/srain" rel="nofollow"><img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif"></a></p> |
Sophie-Rain-SpiderMan-X-Videos/Sophie.Rain.Sophie.Rain.Spiderman.Video.Leaks | Sophie-Rain-SpiderMan-X-Videos | 2025-04-27T17:06:05Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-27T17:04:26Z | <p><a href="https://social.danielwellington.com/srain" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐖𝐚𝐭𝐜𝐡 𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨)</a></p>
<p><a href="https://social.danielwellington.com/srain" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )</a></p>
<p><a href="https://social.danielwellington.com/srain" rel="nofollow"><img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif"></a></p> |
adarshnagrikar/studio-ai | adarshnagrikar | 2025-04-27T17:05:49Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-13T09:52:41Z | ---
license: apache-2.0
---
|
linsanityuk/task-8-Qwen-Qwen2-7B-Instruct-1745773473 | linsanityuk | 2025-04-27T17:04:45Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2-7B-Instruct",
"base_model:adapter:Qwen/Qwen2-7B-Instruct",
"region:us"
] | null | 2025-04-27T17:04:33Z | ---
base_model: Qwen/Qwen2-7B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
Mimomi/ppo-Huggy | Mimomi | 2025-04-27T16:57:21Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2025-04-27T16:56:51Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Mimomi/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Nitral-AI/Violet_MagCap-12B-v1.5 | Nitral-AI | 2025-04-27T16:56:42Z | 12 | 1 | null | [
"safetensors",
"mistral",
"en",
"license:other",
"region:us"
] | null | 2025-04-26T05:08:11Z | ---
license: other
language:
- en
---
<div style="background: #000000; padding:30px; border-radius:18px; box-shadow: 0 0 15px #FFD70080, 0 0 30px #FFD70033; color:#fff; max-width:900px; margin:auto; font-family:'Roboto', sans-serif; border:1px solid #FFD70040;">
<style>
/* Importing custom fonts */
@import url('https://fonts.googleapis.com/css2?family=Roboto:wght@400;500;700&display=swap');
/* Global Styles */
body {
margin: 0;
padding: 0;
background: #111;
font-family: 'Roboto', sans-serif;
color: #fff;
}
.gold-btn {
display: inline-block;
background: #111;
border: 1px solid #FFD700;
color: #FFD700;
border-radius: 25px;
padding: 12px 24px;
text-decoration: none;
font-weight: 500;
font-size: 1.1em;
margin: 8px 6px;
transition: all 0.4s ease;
box-shadow: 0 0 6px #FFD70040;
}
.gold-btn:hover {
background: linear-gradient(90deg, #FFD700, #ffcc00);
color: #000;
box-shadow: 0 0 18px #FFD700AA, 0 0 36px #FFD70055;
transform: translateY(-3px);
}
/* Smooth fade-in animation */
.fade-in {
animation: fadeIn 1s ease-in;
}
@keyframes fadeIn {
0% { opacity: 0; }
100% { opacity: 1; }
}
/* Fancy image hover effect */
.fancy-img {
transition: all 0.5s ease;
border-radius: 12px;
border: 1px solid #FFD70033;
margin-bottom: 2em;
box-shadow: 0 0 12px #FFD70033;
}
.fancy-img:hover {
transform: scale(1.05);
box-shadow: 0 0 24px #FFD70066, 0 0 36px #FFD70033;
border-color: #FFD70088;
}
/* Glassmorphism effect */
.glass-card {
background: rgba(0, 0, 0, 0.6);
backdrop-filter: blur(10px);
border-radius: 12px;
box-shadow: 0 4px 8px rgba(0, 0, 0, 0.2);
padding: 20px;
margin-bottom: 2em;
}
/* Button Pulse Animation */
.pulse-btn {
animation: pulse 1.5s infinite;
}
@keyframes pulse {
0% { box-shadow: 0 0 5px #FFD700; }
50% { box-shadow: 0 0 15px #FFD700; }
100% { box-shadow: 0 0 5px #FFD700; }
}
/* Hover effect for paragraphs and headings */
h1, h2, h3 {
transition: transform 0.3s ease-in-out;
}
h1:hover, h2:hover, h3:hover {
transform: translateY(-5px);
color: #FFD700;
}
</style>
<h1 class="fade-in" style="color:#FFD700; font-size:2.5em; margin-bottom:0.3em;">🧠 Violet-Magcap-12B - v1.5</h1>
<p style="color:#ccc; font-style:italic;">What's new in version 1.5?</p>
<p> The reasoning format now uses <strong><think></strong> ... <strong></think></strong> XML tags. I’ve also added reasoning length control, so you can fine-tune the reasoning output. See the screenshots below for a visual reference, plus importable presets in the repo.</p>
<hr style="border:1px solid #FFD700; margin:2em 0;">
<h2 style="color:#FFD700;">⚙️ Usage Presets</h2>
<p><a href="https://huggingface.co/Nitral-AI/Violet_MagCap-12B-v1.5/tree/main/SillyTavern" class="gold-btn pulse-btn">🎛️ SillyTavern Presets</a></p>
<hr style="border:1px solid #FFD700; margin:2em 0;">
<h2 style="color:#FFD700;">💾 Quantized Versions</h2>
<p> 🧠 <a href="https://huggingface.co/Nitrals-Quants/Violet_MagCap-12B-v1.5-Q5_K_M-GGUF" class="gold-btn">Q5_K_M-GGUF</a><br> 🧠 <a href="https://huggingface.co/Nitral-AI/Violet_MagCap-12B-v1.5-Q8_0-GGUF" class="gold-btn">Q8_0-GGUF</a> </p>
<hr style="border:1px solid #FFD700; margin:2em 0;">
<h2 style="color:#FFD700;">📦 Prompt Format</h2>
<p><strong style="color:#fff;">Reasoning Block + Prefix</strong></p>
<img src="https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/a_r83vE8xPRnndkBYVrvj.png" alt="Reasoning Format" class="fancy-img fade-in" style="max-width:100%; margin-bottom:1em;">
<p><strong style="color:#fff;">ChatML Format</strong></p>
<img src="https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/d-Hcpk-FB5tt_CPZdzay-.png" alt="ChatML Format" class="fancy-img fade-in" style="max-width:100%; margin-bottom:1em;">
<p><strong style="color:#fff;">Quick Reply's</strong></p>
<img src="https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/ZrOx_-1G8L-iYZBKrgJwP.png" alt="Quick Reply Preset" class="fancy-img fade-in" style="max-width:100%;">
<hr style="border:1px solid #FFD700; margin:2em 0;">
<h2 style="color:#FFD700;">🌀 Vibe Check</h2>
<blockquote style="color:#ccc; font-style:italic; border-left:4px solid #FFD700; padding-left:1em; margin-left:0;"> It will help you solve problems. It will also make you question your existence.<br> <strong style="color:#FFD700;">Use wisely—or don’t.</strong> </blockquote>
<hr style="border:1px solid #FFD700; margin:2em 0;">
<p style="color:#FFD700;"><strong>🧬 Created by:</strong> <a href="https://huggingface.co/Nitral-AI" class="gold-btn">Nitral-AI</a> </p>
</div>
|
Moamen-dcp/whisper-turbo-CS-adapters-arazn1-mp3 | Moamen-dcp | 2025-04-27T16:56:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-27T16:56:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ReadyArt/GLM-4-32B-0414_EXL2_2.5bpw_H8 | ReadyArt | 2025-04-27T16:51:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"glm4",
"text-generation",
"conversational",
"zh",
"en",
"base_model:THUDM/GLM-4-32B-0414",
"base_model:quantized:THUDM/GLM-4-32B-0414",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2025-04-27T16:29:06Z | ---
license: mit
language:
- zh
- en
pipeline_tag: text-generation
library_name: transformers
base_model: THUDM/GLM-4-32B-0414
base_model_relation: quantized
quantized_by: ArtusDev
---
# GLM-4-32B-0414
## Introduction
The GLM family welcomes new members, the **GLM-4-32B-0414** series models, featuring 32 billion parameters. Their performance is comparable to OpenAI's GPT series and DeepSeek's V3/R1 series, and they offer very user-friendly local deployment. GLM-4-32B-Base-0414 was pre-trained on 15T of high-quality data, including substantial reasoning-type synthetic data, which lays the foundation for subsequent reinforcement learning extensions. In the post-training stage, we employed human preference alignment for dialogue scenarios. Additionally, using techniques like rejection sampling and reinforcement learning, we enhanced the model's performance in instruction following, engineering code, and function calling, thus strengthening the atomic capabilities required for agent tasks. GLM-4-32B-0414 achieves good results in engineering code, Artifact generation, function calling, search-based Q&A, and report generation. In particular, on several benchmarks such as code generation and specific Q&A tasks, GLM-4-32B-Base-0414 achieves performance comparable to larger models such as GPT-4o and DeepSeek-V3-0324 (671B).
**GLM-Z1-32B-0414** is a reasoning model with deep thinking capabilities. It was developed from GLM-4-32B-0414 through cold start and extended reinforcement learning, with further training on tasks including mathematics, code, and logic. Compared to the base model, GLM-Z1-32B-0414 significantly improves mathematical abilities and the capability to solve complex tasks. During training, we also introduced general reinforcement learning based on pairwise ranking feedback, which further enhances the model's general capabilities.
**GLM-Z1-Rumination-32B-0414** is a deep reasoning model with rumination capabilities (positioned against OpenAI's Deep Research). Unlike typical deep thinking models, the rumination model is capable of deeper and longer thinking to solve more open-ended and complex problems (e.g., writing a comparative analysis of AI development in two cities and their future development plans). Z1-Rumination is trained through scaling end-to-end reinforcement learning with responses graded by ground-truth answers or rubrics, and it can make use of search tools during its deep thinking process to handle complex tasks. The model shows significant improvements in research-style writing and complex tasks.
Finally, **GLM-Z1-9B-0414** is a surprise. We employed all the aforementioned techniques to train a small model (9B). GLM-Z1-9B-0414 exhibits excellent capabilities in mathematical reasoning and general tasks. Its overall performance is top-ranked among all open-source models of the same size. Especially in resource-constrained scenarios, this model achieves an excellent balance between efficiency and effectiveness, providing a powerful option for users seeking lightweight deployment.
## Showcase
### Animation Generation
<table>
<tr>
<td style="text-align: center; font-size: 16px; font-weight: bold; padding: 10px; width: 420px;">
GLM-Z1-32B-0414
</td>
<td style="text-align: center; font-size: 16px; font-weight: bold; padding: 10px; width: 420px;">
GLM-4-32B-0414
</td>
</tr>
<tr>
<td style="vertical-align: top; padding: 10px; width: 420px;">
<video src="https://github.com/user-attachments/assets/849ff9fd-b54d-4c74-9ee5-3412e1a09e32"
style="width: 400px; height: 300px; object-fit: contain;" autoplay loop muted playsinline></video>
<div style="margin-top: 10px; font-size: 14px; color: #333; width: 400px;">
write a Python program that shows a ball bouncing inside a spinning hexagon. The ball should be affected by gravity and friction, and it must bounce off the rotating walls realistically
</div>
</td>
<td style="vertical-align: top; padding: 10px; width: 420px;">
<video src="https://github.com/user-attachments/assets/8dccdb9d-cc44-4732-b438-74a4e3cb9dfb"
style="width: 400px; height: 300px; object-fit: contain;" autoplay loop muted playsinline></video>
<div style="margin-top: 10px; font-size: 14px; color: #333; width: 400px;">
Use HTML to simulate the scenario of a small ball released from the center of a rotating hexagon. Consider the collision between the ball and the hexagon's edges, the gravity acting on the ball, and assume all collisions are perfectly elastic. (Prompt translated from Chinese)
</div>
</td>
</tr>
</table>
### Web Design
<table>
<tr>
<td style="text-align: center; font-size: 16px; font-weight: bold; padding: 10px; width: 420px;">
GLM-4-32B-0414
</td>
<td style="text-align: center; font-size: 16px; font-weight: bold; padding: 10px; width: 420px;">
GLM-4-32B-0414
</td>
</tr>
<tr>
<td style="vertical-align: top; padding: 10px; width: 420px;">
<img src="https://github.com/user-attachments/assets/bd9c1fc1-c784-4e8f-9c76-5f7389a715f1"/>
<div style="margin-top: 10px; font-size: 14px; color: #333; width: 400px;">
Design a drawing board that supports custom function plotting, allowing adding and deleting custom functions, and assigning colors to functions. (Prompt translated from Chinese)
</div>
</td>
<td style="vertical-align: top; padding: 10px; width: 420px;">
<img src="https://github.com/user-attachments/assets/7ad12d52-9229-4278-8d1b-ffbf43e99070"/>
<div style="margin-top: 10px; font-size: 14px; color: #333; width: 400px;"> Design a UI for a mobile machine learning platform, which should include interfaces for training tasks, storage management, and personal statistics. The personal statistics interface should use charts to display the user's resource usage over a period. Use Tailwind CSS to style the page, and display these 3 mobile interfaces tiled on a single HTML page. (Prompt translated from Chinese) </div>
</td>
</tr>
</table>
### SVG Generation
<table>
<tr>
<td style="text-align: center; font-size: 16px; font-weight: bold; padding: 10px; width: 420px;">
GLM-4-32B-0414
</td>
<td style="text-align: center; font-size: 16px; font-weight: bold; padding: 10px; width: 420px;">
GLM-4-32B-0414
</td>
</tr>
<tr>
<td style="vertical-align: top; padding: 10px; width: 420px;">
<img src="https://github.com/user-attachments/assets/9407e4c1-1876-4ab5-838c-839836fb418a"/>
<div style="margin-top: 10px; font-size: 14px; color: #333; width: 400px;">
Create a misty Jiangnan scene using SVG. (Prompt translated from Chinese)
</div>
</td>
<td style="vertical-align: top; padding: 10px; width: 420px;">
<img src="https://github.com/user-attachments/assets/bcce8c5a-cedf-45c8-b666-ddb023d5b49c"/>
<div style="margin-top: 10px; font-size: 14px; color: #333; width: 400px;"> Use SVG to illustrate the training process of an LLM. (Prompt translated from Chinese) </div>
</td>
</tr>
</table>
### Search-Based Writing
For search-based writing tasks, we use the following system prompt (kept in the original Chinese, since it is passed to the model verbatim) to have the model respond based on search results. In brief, it instructs the model to synthesize rather than copy the retrieved results into a professional, in-depth answer; to lengthen replies when the information is sufficient; to screen out irrelevant results; to keep objective answers brief, with at most one or two supplementary sentences; to format replies readably, using lists for comparisons or enumerations; to answer in the user's language; and to cite search results at the end of sentences in the 【0†source】 format where appropriate:
```
请根据所给搜索返回结果对用户问题进行作答。
## 注意
1. 充分利用和整理收集到的信息,而不是简单的复制粘贴,生成符合用户要求且有深度的专业答案。
2. 所提供信息充分的情况下,你的回答需尽可能延长,从用户意图角度出发,提供具有足够信息量和多角度的回复。
3. 另外,并非所有的搜索结果都与用户问题密切相关,请仔细的甄别、筛选和利用。
4. 客观类问答的答案通常非常简短,你可以适当补充一到两句相关信息,以丰富内容。
5. 请确保你的回复格式美观、可读性强。对于多实体对比或列举,善用列表格式来帮助用户更好的理解信息。
6. 除非用户要求,否则你回答的语言请于用户提问语言保持一致。
7. 在适当情况下在句子末尾使用例如:【0†source】的格式引用搜索结果。
```
When using, you can obtain search results through methods such as `RAG` or `WebSearch`, and wrap them in `observation`, for example:
```json
[
{
"role": "user",
"content": "Explore the common characteristics of children's literature, with a focus on its narrative techniques and thematic tendencies. This includes narrative techniques: common approaches in children's literature such as first-person, third-person, omniscient narrator, and interactive narration, and their influence on young readers. It also includes thematic tendencies: recurring themes in children's literature such as growth, adventure, friendship, and family, with an analysis of how these themes impact children's cognitive and emotional development. Additionally, other universal features such as the use of personification, repetitive language, symbolism and metaphor, and educational value should be considered. Please provide a detailed analytical report based on academic research, classic examples of children's literature, and expert opinions."
},
{
"role": "observation",
"content": "【{id}†{title}†{url}】\n{content}"
},
...
]
```
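As a minimal sketch of this flow (the `web_search` helper and its return fields are hypothetical placeholders; only the message structure follows the format above):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "THUDM/GLM-4-32B-0414"
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, device_map="auto")

SEARCH_SYSTEM_PROMPT = "请根据所给搜索返回结果对用户问题进行作答。..."  # full prompt above

def web_search(query):
    """Hypothetical retrieval helper; replace with a real RAG or WebSearch backend."""
    return [{"id": 0, "title": "Example title", "url": "https://example.com",
             "content": "Example snippet relevant to the query."}]

# Wrap each result in the 【{id}†{title}†{url}】 header expected by the model.
results = web_search("common characteristics of children's literature")
observation = "\n\n".join(f"【{r['id']}†{r['title']}†{r['url']}】\n{r['content']}" for r in results)

message = [
    {"role": "system", "content": SEARCH_SYSTEM_PROMPT},
    {"role": "user", "content": "Explore the common characteristics of children's literature."},
    {"role": "observation", "content": observation},
]
inputs = tokenizer.apply_chat_template(
    message, return_tensors="pt", add_generation_prompt=True, return_dict=True
).to(model.device)
out = model.generate(input_ids=inputs["input_ids"],
                     attention_mask=inputs["attention_mask"], max_new_tokens=2048)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```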
For the above prompt, we use an internal or external search model to obtain the search results. Using the format shown above, we can generate the following analysis report:
<div style="height: 400px; width: 100%; overflow: auto; border: 5px solid #ddd; padding: 20px;">
# Analysis Report on Common Characteristics of Children's Literature
Children's literature, as a literary genre specifically created for children, possesses unique artistic features and expressive techniques. This report will comprehensively analyze the common characteristics of children's literature from three dimensions: narrative methods, thematic tendencies, and other universal features, integrating academic research, classic examples of children's literature, and expert viewpoints.
## Narrative Methods and Their Impact on Child Readers
The narrative methods of children's literature exhibit diverse characteristics, including first-person, third-person, narration, and interactive storytelling. These different narrative perspectives and methods profoundly influence children's reading experiences and cognitive development.
**First-person narration** is common in children's literature, unfolding the story from the perspective of a child protagonist. This narrative style bridges the gap between the reader and the story, making it easier for child readers to identify with the characters and develop emotional resonance. For example, many coming-of-age novels use first-person narration, allowing the protagonist to directly recount their experiences and feelings, making readers feel as though they are experiencing the joys and sorrows of growth alongside the protagonist. This narrative perspective lends the work a strong sense of authenticity and intimacy, helping to cultivate children's empathy【1†source】.
**Third-person narration** offers a broader perspective, allowing the author to flexibly switch between different characters' viewpoints and present richer layers of the story. In children's literature, third-person omniscient narration enables the author to control the narrative pace, revealing or concealing information as needed to guide children's attention. At the same time, third-person narration facilitates direct dialogue between the author and the reader, conveying values or explaining complex concepts through narration. This narrative method positively influences children's macro-thinking and comprehensive understanding【1†source】.
**Narration (authorial intrusion)** is a unique narrative technique in children's literature, where the author directly appears as the "storyteller," explaining the background, commenting on characters, or posing questions to the reader. This technique is particularly common in classic fairy tales, such as the opening lines of *Andersen's Fairy Tales*: "Once, there was a child..." Narration helps children understand the story's context, fills cognitive gaps, and conveys the author's educational intent. Research shows that appropriate authorial intrusion aids children in grasping the story's structure and improving reading comprehension【5†source】.
**Interactive storytelling** is a new trend in contemporary children's literature, especially prominent in the digital media era. Interactive storytelling breaks the traditional unidirectional author-reader relationship, encouraging child readers to participate in the story's creation, such as by choosing plot directions, character dialogues, or endings. This participatory reading enhances children's sense of agency and fosters decision-making skills and creative thinking. For example, some children's reading apps incorporate interactive elements, allowing children to influence the story's development through clicks, drag-and-drop actions, and other operations, thereby gaining a stronger sense of immersion and achievement【6†source】. Interactive storytelling transforms children from passive information recipients into active meaning-makers, uniquely contributing to the development of their subjectivity.
*Table: Common Narrative Methods in Children's Literature and Their Effects*
| **Narrative Method** | **Characteristics** | **Impact on Child Readers** | **Classic Examples** |
|----------------------|--------------------|----------------------------|---------------------|
| **First-Person** | Told from the child protagonist's perspective | Enhances immersion, fosters empathy | *Charlotte's Web*, *The Straw House* |
| **Third-Person** | Omniscient or limited perspective | Expands horizons, develops comprehensive understanding | *Harry Potter* series |
| **Narration** | Direct authorial intrusion into the narrative | Aids comprehension, conveys values | *Andersen's Fairy Tales* |
| **Interactive** | Encourages reader participation in creation | Cultivates agency and creative thinking | Children's interactive reading apps |
Notably, the narrative methods of children's literature are often closely intertwined with the **childhood perspective**. The childhood perspective does not necessarily mean the narrator must be a child but refers to the work's ability to describe the world to the greatest extent from a child's heart, expressing their inner psychology and external circumstances【2†source】. Through the childhood perspective, readers can embark on a spiritual journey with a child's mindset, a narrative strategy that creates a strong sense of realism, allowing child readers to achieve emotional identification and cognitive resonance during the reading process【1†source】. The use of the childhood perspective gives the work's language a perceptual and naive quality, often with a prose-like and spatial structure, artistic features that align with children's cognitive characteristics and aid their acceptance and understanding【2†source】.
## Thematic Tendencies and Their Impact on Children's Cognitive and Emotional Development
The thematic choices in children's literature exhibit distinct tendencies, with common themes including growth, adventure, friendship, and family. These themes not only form the core content of children's literature but also subtly influence children's cognitive development and emotional shaping.
**The theme of growth** is one of the central motifs in children's literature. Growth narratives are regarded as the artistic lifeblood of children's literature, focusing on depicting the pivotal moments of rapid psychological development in children, particularly the awakening and establishment of self-awareness【3†source】. Growth literature typically includes three elements: an artistic portrayal of the self-awareness construction process in growing adolescents, a developmental story with logical propulsion, and the presentation of the protagonist's spiritual trials and quest for direction【3†source】. By reading growth-themed works, child readers can indirectly experience the confusion and breakthroughs of growing up and understand the formation of self-identity. Classics such as Astrid Lindgren's *Pippi Longstocking* and Cao Wenxuan's *The Straw House* vividly depict children's psychological growth trajectories in specific environments. Research indicates that growth-themed literary works help children build a positive self-concept and develop the courage and resilience to face challenges, positively contributing to their psychological development【9†source】.
**The theme of adventure** holds an important place in children's literature, satisfying children's curiosity about exploring the unknown. Adventure stories often feature unusual settings and unknown challenges, with the protagonist growing through overcoming difficulties. Classics like *Robinson Crusoe* and *The Adventures of Tom Sawyer* attract child readers with thrilling plots while conveying the importance of qualities such as courage, wisdom, and perseverance. The impact of adventure themes on children's cognitive development mainly lies in expanding their imaginative space and fostering problem-solving skills. In adventure stories, children must analyze situations, make plans, and respond to unexpected events alongside the protagonist, a process that exercises their logical thinking and adaptability【14†source】. At the same time, the unfamiliar environments and novel experiences in adventure stories stimulate children's curiosity and desire to learn, laying the foundation for cultivating an exploratory spirit. As experts point out, excellent children's literature should be grounded in reality, rich in depth, and generate significant inspiration and emotional appeal, guiding children to comprehensively understand the world【14†source】.
**The theme of friendship** is equally prevalent in children's literature, reflecting children's emphasis on peer relationships. Friendship and love are regarded as humanity's most precious qualities, often depicted in children's literature as beacons in the night, guiding children toward the future【9†source】. Friendship stories typically revolve around interactions between children, portraying positive behaviors such as sharing, cooperation, and understanding. Examples include the genuine friendships among the children at Tomoe Gakuen in *Totto-Chan: The Little Girl at the Window* and the promise and mutual aid between Wilbur and Charlotte in *Charlotte's Web*. These stories help child readers recognize the value of friendship and learn how to build and maintain interpersonal relationships. Research shows that children need peer support during their growth, as friends provide crucial emotional anchors, offering the greatest emotional support and comfort in unfamiliar environments【16†source】. By reading friendship-themed works, children can learn social skills, develop empathy, and cultivate a spirit of cooperation, qualities essential for their social development【17†source】.
**The theme of family** is an indispensable subject in children's literature, depicting the emotional bonds and interaction patterns among family members. As the primary setting for children's earliest socialization, the family atmosphere and parenting styles profoundly impact children's mental health【10†source】. Family stories in children's literature often focus on parent-child relationships, sibling bonds, and other dynamics, such as Alice's relationship with her sister in *Alice's Adventures in Wonderland* and the Little Prince's interactions with the rose in *The Little Prince*. These stories help children understand the responsibilities and expectations of family roles and learn to handle conflicts within the family. Research indicates that a positive family atmosphere and parental support promote the development of children's positive psychological traits, while adverse family environments and parenting behaviors negatively affect their mental health【10†source】【11†source】. By reading family-themed works, children can gain emotional support, learn skills for managing family relationships, and establish healthy family values.
*Table: Common Themes in Children's Literature and Their Impact on Child Development*
| **Theme Type** | **Content Representation** | **Impact on Cognitive Development** | **Impact on Emotional Development** | **Classic Examples** |
|---------------|---------------------------|-------------------------------------|-------------------------------------|---------------------|
| **Growth** | Awakening of self-awareness, psychological trials and breakthroughs | Establishes self-concept, fosters problem-solving skills | Shapes positive self-identity, enhances psychological resilience | *The Straw House*, *Pippi Longstocking* |
| **Adventure** | Exploring the unknown, overcoming challenges | Expands imaginative space, exercises logical thinking | Cultivates courage and perseverance | *Robinson Crusoe*, *The Adventures of Tom Sawyer* |
| **Friendship** | Peer interactions, mutual aid and cooperation | Learns social skills, understands interpersonal dynamics | Develops empathy, builds a sense of belonging | *Charlotte's Web*, *Totto-Chan: The Little Girl at the Window* |
| **Family** | Parent-child relationships, sibling bonds | Understands social roles, learns communication skills | Gains emotional support, establishes secure attachments | *Alice's Adventures in Wonderland*, *The Little Prince* |
Regarding thematic choices, children's literature researcher Zhu Ziqiang proposed the famous "Three Major Motifs" theory, categorizing children's literary works into "the motif of love," "the motif of the mischievous child," and "the motif of nature"【8†source】. The motif of love focuses on emotional connections between children and adults or peers; the motif of the mischievous child portrays children's free-spirited nature; and the motif of nature emphasizes the harmonious relationship between children and the natural environment. These three motifs reflect the richness of the children's world from different angles, providing diverse emotional experiences and cognitive frameworks for children. Notably, these themes do not exist in isolation; outstanding works often organically integrate multiple themes. For example, the *Harry Potter* series incorporates growth, friendship, adventure, and family elements, presenting child readers with a multidimensional spiritual world.
## Other Universal Features and Their Artistic Expression
In addition to narrative methods and thematic tendencies, children's literature exhibits a series of universal artistic features, including anthropomorphism, repetitive language, symbolism and metaphor, and educational significance. These features collectively constitute the unique aesthetic style of children's literature, subtly influencing children's cognitive development and aesthetic cultivation.
**Anthropomorphism** is one of the most distinctive artistic features of children's literature. In children's literary works, animals, plants, and even inanimate objects are often endowed with human thoughts, emotions, and behaviors, greatly enhancing the story's fun and imagination. Research shows that anthropomorphism is a frequently used technique by children's literature creators to attribute human characteristics to animals, enabling them to possess perception and communication abilities【19†source】. Through anthropomorphism, children can more easily understand abstract concepts and moral principles, as anthropomorphic characters translate complex ideas into familiar emotional and behavioral patterns. For example, in scientific fairy tales, anthropomorphic characters can help explain scientific principles, making abstract concepts tangible【18†source】. Anthropomorphism not only enriches the narrative techniques of children's literature but also provides children with a unique perspective for understanding the relationship between humans and nature. It is worth noting that excessive anthropomorphism may affect children's accurate understanding of the animal world, so modern children's literature pays more attention to balancing the natural attributes of characters with human characteristics when employing anthropomorphic techniques【19†source】.
**Repetitive language** is extremely common in children's literature, a linguistic feature rooted in oral traditions originally intended to aid memory and dissemination【20†source】. In children's literature, the repetitive use of words, phrases, or sentences serves multiple functions: constructing the story's framework, emphasizing key information, creating rhythm and musicality, and training children's vocabulary skills. For example, in *The Very Hungry Caterpillar*, the author repeatedly uses phrases like "On Monday, he ate one apple. On Tuesday, he ate two pears..." This not only builds the story's structure but also helps children learn numbers and days of the week. Repetitive structures also aid children in developing an awareness of language patterns during the early stages of language acquisition, fostering a sense of language and memory skills【21†source】. Research indicates that repetitive language in children's literature promotes children's language acquisition, helping them master vocabulary and syntactic rules. At the same time, this linguistic feature enhances the story's participatory nature, as children can often join in reciting the repetitive parts, gaining a sense of achievement.
**Symbolism and metaphor** are common expressive techniques in children's literature, conveying abstract meanings through concrete imagery. Symbolism uses specific objects to represent abstract concepts or emotions, while metaphor connects two different things through comparison, creating new meanings. In children's literature, symbolism and metaphor are usually presented in a simple and clear manner, avoiding overly complex interpretations. For example, the character configurations and metaphorical connotations in *The Wizard of Oz* are thought-provoking, as these characters not only breathe life into the story but also convey profound life philosophies through their symbolic meanings【24†source】. Symbolism and metaphor in children's literature are often related to themes such as growth, friendship, and courage, helping children understand abstract concepts through concrete and figurative expressions. Research shows that appropriate metaphors can promote children's cognitive development, stimulating their imagination and creativity【23†source】. As children grow older, their ability to understand symbolism and metaphor gradually improves, providing children's literature with multi-layered meaning spaces.
**Educational significance** is an indispensable component of children's literature, which inherently carries the gene of children's education【22†source】. Excellent children's literary works simultaneously possess entertainment and educational functions, not only helping children understand the objective world, enrich their inner emotions, and acquire life wisdom but also cultivating their perception, aesthetic sensibility, thinking skills, and creativity【15†source】. Educational significance in children's literature is often not directly presented through preaching but naturally revealed through the storyline and characters' fates. For example, many classic fairy tales convey the importance of qualities such as bravery and honesty through the protagonist's adventurous experiences, while popular science books introduce scientific knowledge through interesting plots and characters. Experts point out that children's literature writers should shoulder the important responsibility of education, incorporating care for children's mental growth into their works【22†source】. It is worth noting that the educational significance of children's literature should respect children's receptive abilities, avoiding excessive preaching or moral indoctrination, and instead naturally influencing children's values and behaviors through artistic appeal.
**Storytelling** is the most basic and essential feature of children's literature. Children's perceptual, imagery-driven, and novelty-seeking cognitive characteristics and receptive psychology further determine that "storytelling" is an indispensable ontological feature of children's literature【25†source】. Engaging plots are the most crucial aspect of children's literary works because, compared to adults, children's understanding of things relies mainly on intuition, and plots play a key role in guiding children's comprehension of stories【26†source】. The storytelling quality of children's literature is reflected in multiple aspects: clear cause-and-effect relationships, compact narrative rhythm, and satisfying endings. These elements work together to immerse children in the story world, providing emotional satisfaction and cognitive inspiration. As researchers have noted, plots must be performed by specific characters in specific situations to convey individual experiences in unique space-time environments【7†source】. In children's literature, storytelling is not merely an artistic technique but a bridge connecting children to the world. Through stories, children can safely experience various life scenarios and learn methods for coping with challenges.
In terms of **language features**, children's literature typically adopts a concise, clear, and vivid language style, avoiding complex sentence structures and abstract vocabulary. This linguistic characteristic aligns with children's cognitive development levels, facilitating their understanding and acceptance. At the same time, the language of children's literature is often rich in rhythm and musicality, enhancing readability and memorability through techniques such as rhyming and repetition. For example, Michael Rosen's children's literary works extensively employ repetitive structures and rhymes, a language usage that helps children develop an awareness of language patterns during the early stages of language acquisition【21†source】. The language of children's literature also often includes rich sensory descriptions and emotional expressions, stimulating children's imagination through concrete and tangible imagery. Scholar Jay Davis's research shows that the interactive use of language in children's literature can influence children's language habits and promote their language development【21†source】.
In summary, these universal features of children's literature collectively constitute its unique artistic charm and educational value. Anthropomorphism and symbolism expand children's imaginative spaces, repetitive language and storytelling promote language acquisition and cognitive development, and the natural integration of educational significance achieves the artistic effect of "teaching through entertainment." These features do not exist in isolation but are interwoven and organically unified, collectively serving the comprehensive development of child readers.
## Conclusion
Through a systematic analysis of the narrative methods, thematic tendencies, and other universal features of children's literature, we can draw the following conclusions: As a special literary genre, the creation and reception of children's literature follow unique rules. In terms of narrative methods, children's literature flexibly employs various techniques such as first-person, third-person, narration, and interactive storytelling to adapt to children's cognitive characteristics and receptive psychology. Among these, the use of the childhood perspective is particularly important, as it enhances the work's sense of realism and intimacy, enabling child readers to develop emotional resonance【1†source】【2†source】. In terms of thematic choices, growth, adventure, friendship, and family constitute the main content of children's literature. These themes not only satisfy children's curiosity and desire to explore but also subtly influence their cognitive development and emotional shaping【3†source】【9†source】. Other universal features such as anthropomorphism, repetitive language, symbolism, and educational significance collectively form the unique artistic style and educational value of children's literature【18†source】【20†source】【24†source】.
These characteristics of children's literature do not exist in isolation but are interconnected and organically unified. For example, adventure themes are often combined with third-person omniscient narration to attract child readers through compact plots and vivid descriptions; friendship themes frequently employ first-person narration to enhance emotional resonance; and anthropomorphism is commonly found in nature-themed works, helping children understand the relationship between humans and nature. These features collectively serve the comprehensive development of child readers, meeting their entertainment needs while promoting their cognitive growth and emotional maturity.
From an academic research perspective, children's literature studies should emphasize the application of narrative theory, as narrative theory focuses more on the "how" of storytelling—narrative form—which aligns closely with the research focus of children's literature【0†source】. At the same time, cognitive research methods provide new perspectives for children's literature studies. By combining cognitive science with literary theory, we can gain a deeper understanding of how children's literature influences children's thinking and cognitive development【4†source】. Future research should continue to explore the application of these theoretical methods in children's literature studies while paying attention to the intersection and integration of children's literature with emerging fields such as digital media and interdisciplinary education.
From a creative practice perspective, children's literature writers should fully grasp children's cognitive characteristics and emotional needs, incorporating care for growth and educational wisdom into their works. As experts have pointed out, excellent children's literary works should be grounded in reality, rich in depth, and generate significant appeal and resonance, guiding children to comprehensively understand the world and correctly recognize themselves and society【14†source】. At the same time, children's literature creation should keep pace with the times, addressing new problems and challenges faced by contemporary children, such as media literacy in the digital age and identity formation in multicultural contexts, to provide targeted spiritual nourishment for children.
From an educational application perspective, children's literature should fully leverage its unique role in children's mental growth. Through carefully designed reading activities, teachers and parents can help children deeply understand the themes and meanings in works, guiding them to connect reading experiences with real life. Research shows that children's literature plays an increasingly important role in language education, the construction of a reading society, and children's mental growth【22†source】. Therefore, children's literature should be incorporated as an important component of school and family education, promoting children's cognitive development and emotional maturity through activities such as reading sharing, role-playing, and creative writing.
In summary, as a unique art form and educational medium, the common characteristics of children's literature constitute an organic whole, collectively serving the comprehensive development of child readers. By deeply understanding these features and their mechanisms of influence, we can better create, research, and apply children's literature, providing high-quality spiritual nourishment for children's healthy growth. Future children's literature research should continue to deepen theoretical exploration, expand research methods, and strengthen interdisciplinary collaboration to address the ever-changing needs of children and the challenges of the times, promoting the continuous development of children's literature.
</div>
### Function Call
GLM-4-32B-0414 supports calling external tools in JSON format. This can be done via HuggingFace Transformers, vLLM, or SGLang.
The message format for tool calling is as follows:
```json
{
"role": "asssitant",
"metadata": function_name,
"content": json.dumps(call_arguments, ensure_ascii=False)
}
```
The message format for tool execution results is as follows:
```json
{
"role": "observation",
"content": json.dumps(tool_response, ensure_ascii=False) if not isinstance(tool_response, str) else tool_response
}
```
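Concretely, with a weather tool like the `realtime_aqi` function in the example below, one call-and-result round trip appends messages of the following shape (a sketch; the argument value is illustrative):

```python
import json

# The assistant emits the call: `metadata` names the function, `content` carries JSON arguments.
call_message = {"role": "assistant", "metadata": "realtime_aqi",
                "content": json.dumps({"city": "北京"}, ensure_ascii=False)}

# The tool's result is then fed back to the model under the `observation` role.
result_message = {"role": "observation",
                  "content": json.dumps({"city": "北京", "aqi": "10", "unit": "celsius"}, ensure_ascii=False)}
```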
The following example demonstrates the process of GLM-4-32B-0414 calling a tool and generating a final response using HuggingFace Transformers.
```python
import json
import re
import ast
from transformers import AutoModelForCausalLM, AutoTokenizer
MODEL_PATH = "THUDM/GLM-4-32B-0414"
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, device_map="auto")
def is_function_call(single_message):
"""Determine whether the current system message is a function call."""
pattern = re.compile(r'([^\n`]*?)\n({.*?})(?=\w*\n|$)', re.DOTALL)
matches = pattern.findall(single_message)
if not matches:
return False
func_name, args_str = matches[0]
func_name = func_name.strip()
try:
parsed_args = json.loads(args_str)
except json.JSONDecodeError:
try:
parsed_args = ast.literal_eval(args_str)
except:
return False
return {"name": func_name, "arguments": parsed_args}
def realtime_aqi(city):
"""Weather Query Tool"""
if '北京' in city.lower():
return json.dumps({'city': '北京', 'aqi': '10', 'unit': 'celsius'}, ensure_ascii=False)
elif '上海' in city.lower():
return json.dumps({'city': '上海', 'aqi': '72', 'unit': 'fahrenheit'}, ensure_ascii=False)
else:
return json.dumps({'city': city, 'aqi': 'unknown'}, ensure_ascii=False)
def build_system_prompt(tools):
"""Construct system prompt based on the list of available tools."""
if tools is None:
tools = []
value = "# 可用工具"
contents = []
for tool in tools:
content = f"\n\n## {tool['function']['name']}\n\n{json.dumps(tool['function'], ensure_ascii=False, indent=4)}"
content += "\n在调用上述函数时,请使用 Json 格式表示调用的参数。"
contents.append(content)
value += "".join(contents)
return value
tools = [
{
"type": "function",
"function": {
"name": "realtime_aqi",
"description": "天气预报。获取实时空气质量。当前空气质量,PM2.5,PM10信息",
"parameters": {
"type": "object",
"properties": {
"city": {
"description": "城市名"
}
},
"required": [
"city"
]
}
}
}
]
system_prompt = build_system_prompt(tools)
message = [
{"role": "system", "content": system_prompt},
{"role": "user", "content": "北京和上海今天的天气情况"}
]
print(f"User Message: {message[-1]['content']}")
while True:
inputs = tokenizer.apply_chat_template(
message,
return_tensors="pt",
add_generation_prompt=True,
return_dict=True,
).to(model.device)
generate_kwargs = {
"input_ids": inputs["input_ids"],
"attention_mask": inputs["attention_mask"],
"max_new_tokens": 1024,
"do_sample": True,
}
out = model.generate(**generate_kwargs)
generate_resp = tokenizer.decode(out[0][inputs["input_ids"].shape[1]:-1], skip_special_tokens=False)
stop_sequence = tokenizer.decode(out[0][-1:], skip_special_tokens=False)
if stop_sequence == "<|user|>":
print(f"Assistant Response: {generate_resp.strip()}")
break
function_calls = []
for m in generate_resp.split("<|assistant|>"):
fc_decode = is_function_call(m.strip())
if fc_decode:
message.append({"role": "assistant", "metadata": fc_decode['name'], "content": json.dumps(fc_decode['arguments'], ensure_ascii=False)})
print(f"Function Call: {fc_decode}")
function_calls.append(fc_decode)
else:
message.append({"role": "assistant", "content": m})
print(f"Assistant Response: {m.strip()}")
for fc in function_calls:
function_response = realtime_aqi(
city=fc["arguments"]["city"],
)
print(f"Function Response: {function_response}")
message.append({"role": "observation", "content": function_response})
```
## Evaluation Results
<div style="text-align: center;">
<img src="https://raw.githubusercontent.com/THUDM/GLM-4/refs/heads/main/resources/Bench-32B.png" style="width: 80%;" />
</div>
### GLM-4-0414 Series
| Model | IFEval | BFCL-v3 (Overall) | BFCL-v3 (MultiTurn) | TAU-Bench (Retail) | TAU-Bench (Airline) | SimpleQA | HotpotQA |
| ---------------- | ------ | ----------------- | ------------------- | ------------------ | ------------------- | -------- | -------- |
| Qwen2.5-Max | 85.6 | 50.9 | 30.5 | 58.3 | 22.0 | 79.0 | 52.8 |
| GPT-4o-1120 | 81.9 | 69.6 | 41.0 | 62.8 | 46.0 | 82.8 | 63.9 |
| DeepSeek-V3-0324 | 83.4 | 66.2 | 35.8 | 60.7 | 32.4 | 82.6 | 54.6 |
| DeepSeek-R1 | 84.3 | 57.5 | 12.4 | 33.0 | 37.3 | 83.9 | 63.1 |
| GLM-4-32B-0414 | 87.6 | 69.6 | 41.5 | 68.7 | 51.2 | 88.1 | 63.8 |
> For `SimpleQA` and `HotpotQA`, we sampled nearly 500 test cases from each test set, provided all models with basic `search` and `click` tools, ensured other settings remained consistent, and averaged the results over 3 runs.
| Model | Framework | [SWE-bench Verified](https://openai.com/index/introducing-swe-bench-verified/) | [SWE-bench Verified mini](https://github.com/mariushobbhahn/SWEBench-verified-mini) |
|---|---|---|---|
| GLM-4-32B-0414 | Moatless<sup>[1]</sup> | 33.8 | 38.0 |
| GLM-4-32B-0414 | Agentless<sup>[2]</sup> | 30.7 | 34.0 |
| GLM-4-32B-0414 | OpenHands<sup>[3]</sup> | 27.2 | 28.0 |
[1] [Moatless v0.0.3](https://github.com/aorwall/moatless-tools) used the following parameters: `response_format="react", thoughts_in_action=False, max_iterations=30`. No retries on failed trajectories; other settings are default.
[2] [Agentless v1.5.0](https://github.com/OpenAutoCoder/Agentless) used [BGE](https://github.com/FlagOpen/FlagEmbedding/blob/master/README.md) as the embedding model and [FAISS](https://github.com/facebookresearch/faiss) for similarity search. To speed up patch verification while maintaining performance, the timeout for running a single instance was changed from the default 300s to 180s.
[3] [OpenHands v0.29.1](https://github.com/All-Hands-AI/OpenHands/tree/main) did not use YaRN context extension but limited runs to a maximum of 60 iterations and summarized the history to prevent exceeding the 32K context limit. Summarization was configured as `llm_config="condenser", keep_first=1, max_size=32`. No retries on failed trajectories.
|
RichardErkhov/buildquant_-_krx_Qwen2.5_7B_it_v8-gguf | RichardErkhov | 2025-04-27T16:48:29Z | 11 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-27T06:43:08Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
krx_Qwen2.5_7B_it_v8 - GGUF
- Model creator: https://huggingface.co/buildquant/
- Original model: https://huggingface.co/buildquant/krx_Qwen2.5_7B_it_v8/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [krx_Qwen2.5_7B_it_v8.Q2_K.gguf](https://huggingface.co/RichardErkhov/buildquant_-_krx_Qwen2.5_7B_it_v8-gguf/blob/main/krx_Qwen2.5_7B_it_v8.Q2_K.gguf) | Q2_K | 2.81GB |
| [krx_Qwen2.5_7B_it_v8.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/buildquant_-_krx_Qwen2.5_7B_it_v8-gguf/blob/main/krx_Qwen2.5_7B_it_v8.IQ3_XS.gguf) | IQ3_XS | 3.12GB |
| [krx_Qwen2.5_7B_it_v8.IQ3_S.gguf](https://huggingface.co/RichardErkhov/buildquant_-_krx_Qwen2.5_7B_it_v8-gguf/blob/main/krx_Qwen2.5_7B_it_v8.IQ3_S.gguf) | IQ3_S | 3.26GB |
| [krx_Qwen2.5_7B_it_v8.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/buildquant_-_krx_Qwen2.5_7B_it_v8-gguf/blob/main/krx_Qwen2.5_7B_it_v8.Q3_K_S.gguf) | Q3_K_S | 3.25GB |
| [krx_Qwen2.5_7B_it_v8.IQ3_M.gguf](https://huggingface.co/RichardErkhov/buildquant_-_krx_Qwen2.5_7B_it_v8-gguf/blob/main/krx_Qwen2.5_7B_it_v8.IQ3_M.gguf) | IQ3_M | 3.33GB |
| [krx_Qwen2.5_7B_it_v8.Q3_K.gguf](https://huggingface.co/RichardErkhov/buildquant_-_krx_Qwen2.5_7B_it_v8-gguf/blob/main/krx_Qwen2.5_7B_it_v8.Q3_K.gguf) | Q3_K | 3.55GB |
| [krx_Qwen2.5_7B_it_v8.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/buildquant_-_krx_Qwen2.5_7B_it_v8-gguf/blob/main/krx_Qwen2.5_7B_it_v8.Q3_K_M.gguf) | Q3_K_M | 3.55GB |
| [krx_Qwen2.5_7B_it_v8.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/buildquant_-_krx_Qwen2.5_7B_it_v8-gguf/blob/main/krx_Qwen2.5_7B_it_v8.Q3_K_L.gguf) | Q3_K_L | 3.81GB |
| [krx_Qwen2.5_7B_it_v8.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/buildquant_-_krx_Qwen2.5_7B_it_v8-gguf/blob/main/krx_Qwen2.5_7B_it_v8.IQ4_XS.gguf) | IQ4_XS | 3.96GB |
| [krx_Qwen2.5_7B_it_v8.Q4_0.gguf](https://huggingface.co/RichardErkhov/buildquant_-_krx_Qwen2.5_7B_it_v8-gguf/blob/main/krx_Qwen2.5_7B_it_v8.Q4_0.gguf) | Q4_0 | 4.13GB |
| [krx_Qwen2.5_7B_it_v8.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/buildquant_-_krx_Qwen2.5_7B_it_v8-gguf/blob/main/krx_Qwen2.5_7B_it_v8.IQ4_NL.gguf) | IQ4_NL | 4.16GB |
| [krx_Qwen2.5_7B_it_v8.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/buildquant_-_krx_Qwen2.5_7B_it_v8-gguf/blob/main/krx_Qwen2.5_7B_it_v8.Q4_K_S.gguf) | Q4_K_S | 4.15GB |
| [krx_Qwen2.5_7B_it_v8.Q4_K.gguf](https://huggingface.co/RichardErkhov/buildquant_-_krx_Qwen2.5_7B_it_v8-gguf/blob/main/krx_Qwen2.5_7B_it_v8.Q4_K.gguf) | Q4_K | 4.36GB |
| [krx_Qwen2.5_7B_it_v8.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/buildquant_-_krx_Qwen2.5_7B_it_v8-gguf/blob/main/krx_Qwen2.5_7B_it_v8.Q4_K_M.gguf) | Q4_K_M | 4.36GB |
| [krx_Qwen2.5_7B_it_v8.Q4_1.gguf](https://huggingface.co/RichardErkhov/buildquant_-_krx_Qwen2.5_7B_it_v8-gguf/blob/main/krx_Qwen2.5_7B_it_v8.Q4_1.gguf) | Q4_1 | 4.54GB |
| [krx_Qwen2.5_7B_it_v8.Q5_0.gguf](https://huggingface.co/RichardErkhov/buildquant_-_krx_Qwen2.5_7B_it_v8-gguf/blob/main/krx_Qwen2.5_7B_it_v8.Q5_0.gguf) | Q5_0 | 4.95GB |
| [krx_Qwen2.5_7B_it_v8.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/buildquant_-_krx_Qwen2.5_7B_it_v8-gguf/blob/main/krx_Qwen2.5_7B_it_v8.Q5_K_S.gguf) | Q5_K_S | 4.95GB |
| [krx_Qwen2.5_7B_it_v8.Q5_K.gguf](https://huggingface.co/RichardErkhov/buildquant_-_krx_Qwen2.5_7B_it_v8-gguf/blob/main/krx_Qwen2.5_7B_it_v8.Q5_K.gguf) | Q5_K | 5.07GB |
| [krx_Qwen2.5_7B_it_v8.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/buildquant_-_krx_Qwen2.5_7B_it_v8-gguf/blob/main/krx_Qwen2.5_7B_it_v8.Q5_K_M.gguf) | Q5_K_M | 5.07GB |
| [krx_Qwen2.5_7B_it_v8.Q5_1.gguf](https://huggingface.co/RichardErkhov/buildquant_-_krx_Qwen2.5_7B_it_v8-gguf/blob/main/krx_Qwen2.5_7B_it_v8.Q5_1.gguf) | Q5_1 | 5.36GB |
| [krx_Qwen2.5_7B_it_v8.Q6_K.gguf](https://huggingface.co/RichardErkhov/buildquant_-_krx_Qwen2.5_7B_it_v8-gguf/blob/main/krx_Qwen2.5_7B_it_v8.Q6_K.gguf) | Q6_K | 5.82GB |
| [krx_Qwen2.5_7B_it_v8.Q8_0.gguf](https://huggingface.co/RichardErkhov/buildquant_-_krx_Qwen2.5_7B_it_v8-gguf/blob/main/krx_Qwen2.5_7B_it_v8.Q8_0.gguf) | Q8_0 | 7.54GB |
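As a sketch (not part of the original card), any file in the table above can be fetched programmatically with `huggingface_hub`:

```python
from huggingface_hub import hf_hub_download

# The Q4_K_M entry from the table above; any listed filename works the same way.
path = hf_hub_download(
    repo_id="RichardErkhov/buildquant_-_krx_Qwen2.5_7B_it_v8-gguf",
    filename="krx_Qwen2.5_7B_it_v8.Q4_K_M.gguf",
)
print(path)
```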
Original model description:
---
base_model: unsloth/qwen2.5-7b-instruct-bnb-4bit
tags:
- krx
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** buildquant
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
HERE-Sophie-Rain-Spiderman-Leaks-Videos/Sophie.Rain.Spider.Man.Leaks.Video.Sophie.Rain.Spiderman.Video.Tutorial.Link | HERE-Sophie-Rain-Spiderman-Leaks-Videos | 2025-04-27T16:44:53Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-27T16:43:39Z | <p><a href="https://social.danielwellington.com/srain" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐖𝐚𝐭𝐜𝐡 𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨)</a></p>
<p><a href="https://social.danielwellington.com/srain" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )</a></p>
<p><a href="https://social.danielwellington.com/srain" rel="nofollow"><img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif"></a></p> |
Harsh7760/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-prickly_running_toad | Harsh7760 | 2025-04-27T16:41:28Z | 19 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am prickly running toad",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-04T09:48:26Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-prickly_running_toad
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am prickly running toad
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-prickly_running_toad
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Harsh7760/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-prickly_running_toad", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
leiredsol/distilbert-base-multilingual-cased-majority1.2 | leiredsol | 2025-04-27T16:38:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-27T13:24:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
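Since the card itself is unfilled, the following is only a generic sketch based on the repository's `text-classification` tag; the label set depends on the undocumented fine-tuning data:

```python
from transformers import pipeline

# Labels and their meaning are unknown without the training details below.
classifier = pipeline(
    "text-classification",
    model="leiredsol/distilbert-base-multilingual-cased-majority1.2",
)
print(classifier("Este modelo clasifica texto multilingüe."))
```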
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/AtomThink-LLaVA1.5-7B-i1-GGUF | mradermacher | 2025-04-27T16:31:30Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:Quinn777/AMATH-SFT",
"base_model:Quinn777/AtomThink-LLaVA1.5-7B",
"base_model:quantized:Quinn777/AtomThink-LLaVA1.5-7B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-27T15:50:26Z | ---
base_model: Quinn777/AtomThink-LLaVA1.5-7B
datasets:
- Quinn777/AMATH-SFT
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Quinn777/AtomThink-LLaVA1.5-7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/AtomThink-LLaVA1.5-7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
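As a concrete illustration (not part of the original card), a single-file quant can be fetched with the `huggingface_hub` Python API. The filename below is the i1-Q4_K_M quant from the table that follows; substitute another filename from the table for a different quant level:

```python
# Minimal sketch: download one quant file from this repo with huggingface_hub.
# The filename matches the i1-Q4_K_M entry in the quant table below.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/AtomThink-LLaVA1.5-7B-i1-GGUF",
    filename="AtomThink-LLaVA1.5-7B.i1-Q4_K_M.gguf",
)
print(path)  # local path to the .gguf file, usable with llama.cpp and similar runtimes
```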
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/AtomThink-LLaVA1.5-7B-i1-GGUF/resolve/main/AtomThink-LLaVA1.5-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/AtomThink-LLaVA1.5-7B-i1-GGUF/resolve/main/AtomThink-LLaVA1.5-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/AtomThink-LLaVA1.5-7B-i1-GGUF/resolve/main/AtomThink-LLaVA1.5-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/AtomThink-LLaVA1.5-7B-i1-GGUF/resolve/main/AtomThink-LLaVA1.5-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/AtomThink-LLaVA1.5-7B-i1-GGUF/resolve/main/AtomThink-LLaVA1.5-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/AtomThink-LLaVA1.5-7B-i1-GGUF/resolve/main/AtomThink-LLaVA1.5-7B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/AtomThink-LLaVA1.5-7B-i1-GGUF/resolve/main/AtomThink-LLaVA1.5-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/AtomThink-LLaVA1.5-7B-i1-GGUF/resolve/main/AtomThink-LLaVA1.5-7B.i1-Q2_K.gguf) | i1-Q2_K | 2.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/AtomThink-LLaVA1.5-7B-i1-GGUF/resolve/main/AtomThink-LLaVA1.5-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/AtomThink-LLaVA1.5-7B-i1-GGUF/resolve/main/AtomThink-LLaVA1.5-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/AtomThink-LLaVA1.5-7B-i1-GGUF/resolve/main/AtomThink-LLaVA1.5-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/AtomThink-LLaVA1.5-7B-i1-GGUF/resolve/main/AtomThink-LLaVA1.5-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/AtomThink-LLaVA1.5-7B-i1-GGUF/resolve/main/AtomThink-LLaVA1.5-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/AtomThink-LLaVA1.5-7B-i1-GGUF/resolve/main/AtomThink-LLaVA1.5-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/AtomThink-LLaVA1.5-7B-i1-GGUF/resolve/main/AtomThink-LLaVA1.5-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/AtomThink-LLaVA1.5-7B-i1-GGUF/resolve/main/AtomThink-LLaVA1.5-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/AtomThink-LLaVA1.5-7B-i1-GGUF/resolve/main/AtomThink-LLaVA1.5-7B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 3.9 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/AtomThink-LLaVA1.5-7B-i1-GGUF/resolve/main/AtomThink-LLaVA1.5-7B.i1-Q4_0.gguf) | i1-Q4_0 | 3.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/AtomThink-LLaVA1.5-7B-i1-GGUF/resolve/main/AtomThink-LLaVA1.5-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/AtomThink-LLaVA1.5-7B-i1-GGUF/resolve/main/AtomThink-LLaVA1.5-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AtomThink-LLaVA1.5-7B-i1-GGUF/resolve/main/AtomThink-LLaVA1.5-7B.i1-Q4_1.gguf) | i1-Q4_1 | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/AtomThink-LLaVA1.5-7B-i1-GGUF/resolve/main/AtomThink-LLaVA1.5-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/AtomThink-LLaVA1.5-7B-i1-GGUF/resolve/main/AtomThink-LLaVA1.5-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/AtomThink-LLaVA1.5-7B-i1-GGUF/resolve/main/AtomThink-LLaVA1.5-7B.i1-Q6_K.gguf) | i1-Q6_K | 5.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
kimxxxx/mistral_7b_r8_a8_g4_b4_1e_2e-5 | kimxxxx | 2025-04-27T16:29:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-27T16:29:16Z | ---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** kimxxxx
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
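As an illustrative sketch only (not from the original card, and assuming this repo hosts full merged weights rather than standalone LoRA adapters), the checkpoint could be loaded with the standard `transformers` API:

```python
# Hedged sketch: load the fine-tuned checkpoint with plain transformers.
# Assumption: the repo contains merged model weights; if it holds only LoRA
# adapters, load the base model first and attach the adapters with peft.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kimxxxx/mistral_7b_r8_a8_g4_b4_1e_2e-5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```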
|
shekar-ai/bert_tokenzier_Persian | shekar-ai | 2025-04-27T16:28:40Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-27T02:44:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
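The card leaves this section open; as a hedged placeholder sketch (assuming the repo hosts standard BERT-style tokenizer files, as the model ID suggests), loading it would look like:

```python
# Minimal sketch, not from the original card: load the tokenizer with
# transformers. Assumes the repo contains standard tokenizer files.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("shekar-ai/bert_tokenzier_Persian")
tokens = tokenizer.tokenize("سلام دنیا")  # "Hello world" in Persian
print(tokens)
```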
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |