| column | dtype | range / cardinality |
|---|---|---|
| modelId | string | length 5–139 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-07-15 06:27:42 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 521 classes |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-07-15 06:27:26 |
| card | string | length 11 – 1.01M |

modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
GoYM/gemma-product-description | GoYM | 2025-05-27T19:33:15Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-4b-pt",
"base_model:finetune:google/gemma-3-4b-pt",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-16T03:32:47Z | ---
base_model: google/gemma-3-4b-pt
library_name: transformers
model_name: gemma-product-description
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-product-description
This model is a fine-tuned version of [google/gemma-3-4b-pt](https://huggingface.co/google/gemma-3-4b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="GoYM/gemma-product-description", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
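As a rough illustration of that procedure, supervised fine-tuning with TRL typically looks like the sketch below. This is a hedged example, not the authors' script: the dataset and hyperparameters are placeholders, since the actual training setup is not published in this card.
```python
# Minimal TRL SFT sketch (hypothetical dataset and settings, for illustration only).
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

train_dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset

trainer = SFTTrainer(
    model="google/gemma-3-4b-pt",  # the stated base model
    train_dataset=train_dataset,
    args=SFTConfig(output_dir="gemma-product-description"),
)
trainer.train()
```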
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
smartmyapp/ladji5_2 | smartmyapp | 2025-05-27T19:33:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vits",
"text-to-audio",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| text-to-audio | 2025-05-27T12:17:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
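Since no snippet is given, a speculative sketch for a VITS text-to-audio checkpoint (inferred from the repo's `vits` and `text-to-audio` tags; the input text and its language are placeholders) could be:
```python
# Speculative sketch for a VITS text-to-audio checkpoint (untested assumption).
import torch
from transformers import AutoTokenizer, VitsModel

model = VitsModel.from_pretrained("smartmyapp/ladji5_2")
tokenizer = AutoTokenizer.from_pretrained("smartmyapp/ladji5_2")

inputs = tokenizer("Hello world", return_tensors="pt")  # placeholder text; target language unknown
with torch.no_grad():
    waveform = model(**inputs).waveform  # (batch, samples) at model.config.sampling_rate
```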
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
zidsi/Zlatorog-12B-Instruct-Beta-GGUF | zidsi | 2025-05-27T19:33:05Z | 9 | 0 | transformers | [
"transformers",
"gguf",
"full",
"generated_from_trainer",
"text-generation",
"sl",
"en",
"base_model:zidsi/MistralNemoCPT6",
"base_model:quantized:zidsi/MistralNemoCPT6",
"license:cc-by-nc-nd-4.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2025-05-23T15:41:35Z | ---
library_name: transformers
license: cc-by-nc-nd-4.0
base_model: zidsi/MistralNemoCPT6
tags:
- full
- generated_from_trainer
model-index:
- name: zlatorog_12b_sft_v6
  results: []
language:
- sl
- en
pipeline_tag: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Zlatorog-12B-Instruct-Beta
This model is a fine-tuned version of [zidsi/MistralNemoCPT6](https://huggingface.co/zidsi/MistralNemoCPT6) on a custom mix of SFT datasets.
## Model description
More information needed
## Intended uses & limitations
Research: explore and have fun with a Slovenian LLM :)
## Training and evaluation data
Scores on standard Slovenian benchmarks are poor, **but** "real world" prompt responses are sometimes impressive :)
The hallucination rate on "Who is ...?" prompts is reduced.
Tool use has yet to be evaluated.
Contexts up to 16k tokens should work OK; for longer contexts, additional training data would be needed to improve the long-context CPT stage.
More information needed
## GGUF
The HF model was converted to GGUF using llama.cpp.
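For local inference with these GGUF files, a minimal sketch along the following lines should work. This is a hedged example, not an official snippet: it assumes `llama-cpp-python` is installed and that a Q4_K_M quantization exists in the repo (the filename glob is a guess).
```python
# Hedged sketch: chat inference over this GGUF repo via llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="zidsi/Zlatorog-12B-Instruct-Beta-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant level; pick a file that exists in the repo
    n_ctx=16384,              # the card suggests contexts up to 16k work OK
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Kdo je France Prešeren?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```
|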
yutakas/llava-v1.6-mistral-7b-hf-test | yutakas | 2025-05-27T19:32:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T18:49:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
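No usage snippet is provided. Judging only by the repository name (apparently a llava-v1.6-mistral-7b-hf derivative), a speculative LLaVA-NeXT sketch might look like the following; the architecture, processor classes, and image path are all assumptions.
```python
# Speculative sketch, assuming a LLaVA-NeXT (llava-v1.6) architecture.
import torch
from PIL import Image
from transformers import LlavaNextForConditionalGeneration, LlavaNextProcessor

repo = "yutakas/llava-v1.6-mistral-7b-hf-test"
processor = LlavaNextProcessor.from_pretrained(repo)
model = LlavaNextForConditionalGeneration.from_pretrained(
    repo, torch_dtype=torch.float16, device_map="auto"
)

image = Image.open("example.jpg")  # placeholder image path
conversation = [
    {"role": "user", "content": [{"type": "image"}, {"type": "text", "text": "Describe this image."}]}
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))
```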
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vermoney/cdc5327d-3aaa-49a2-a63f-78fc847a8490 | vermoney | 2025-05-27T19:32:22Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-2-SOLAR-10.7B",
"base_model:adapter:NousResearch/Nous-Hermes-2-SOLAR-10.7B",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-05-27T18:35:14Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cdc5327d-3aaa-49a2-a63f-78fc847a8490
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - e9539959e5b475cc_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/
  type:
    field_instruction: instruct
    field_output: output
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
dpo:
  beta: 0.1
  enabled: true
  group_by_length: false
  rank_loss: true
  reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 3
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: vermoney/cdc5327d-3aaa-49a2-a63f-78fc847a8490
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 2.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 96
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 48
lora_target_linear: true
lr_scheduler: cosine
max_steps: 280
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/e9539959e5b475cc_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 074d0027-87b6-4ea0-a8be-5f7675bf7878
wandb_project: s56-9
wandb_run: your_name
wandb_runid: 074d0027-87b6-4ea0-a8be-5f7675bf7878
warmup_steps: 40
weight_decay: 0.02
xformers_attention: true
```
</details><br>
# cdc5327d-3aaa-49a2-a63f-78fc847a8490
This model is a fine-tuned version of [NousResearch/Nous-Hermes-2-SOLAR-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B) on an unspecified dataset (the axolotl config above points to `e9539959e5b475cc_train_data.json`).
It achieves the following results on the evaluation set:
- Loss: 0.9817
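To try the adapter, a loading sketch consistent with the config above (4-bit base via bitsandbytes) might look like this; it is a hedged example, and inference settings are not specified in the card.
```python
# Hedged sketch: load the LoRA adapter on its 4-bit quantized base model.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Nous-Hermes-2-SOLAR-10.7B",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),  # matches load_in_4bit: true
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "vermoney/cdc5327d-3aaa-49a2-a63f-78fc847a8490")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Nous-Hermes-2-SOLAR-10.7B")
```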
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 18
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 40
- training_steps: 280
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7481 | 0.0132 | 280 | 0.9817 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
JasperV13/yehia-7b-CoT | JasperV13 | 2025-05-27T19:31:54Z | 0 | 0 | null | [
"yehia_reasoning",
"arabic",
"reasoning",
"cot",
"chain-of-thought",
"yehia",
"custom_code",
"ar",
"base_model:Navid-AI/Yehia-7B-preview",
"base_model:finetune:Navid-AI/Yehia-7B-preview",
"license:apache-2.0",
"region:us"
]
| null | 2025-05-27T19:31:53Z | ---
language: ar
license: apache-2.0
tags:
- arabic
- reasoning
- cot
- chain-of-thought
- yehia
base_model: Navid-AI/Yehia-7B-preview
---
# Yehia-7B Chain of Thought Model
The Yehia model, enhanced with Chain-of-Thought reasoning.
## Description
This model applies chain-of-thought reasoning automatically to any question it is asked: it breaks the problem into sequential logical steps to reach the correct answer.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model (AutoModelForCausalLM, since .generate() is called below;
# the plain AutoModel class does not provide generate())
model = AutoModelForCausalLM.from_pretrained("JasperV13/yehia-7b-CoT", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("JasperV13/yehia-7b-CoT")

# Ask a question ("Compute 25 × 16")
question = "احسب 25 × 16"
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs)
answer = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(answer)
```
## Example output
```
Question: Compute 25 × 16
Let me solve this step by step:
Step 1: I will multiply 25 by 16
Step 2: I can split this into (20 + 5) × 16
Step 3: = (20 × 16) + (5 × 16)
Step 4: = 320 + 80
Step 5: = 400
Final answer: 400
```
## Features
- ✅ Automatic step-by-step reasoning
- ✅ Optimized for Arabic
- ✅ Compatible with the transformers library
- ✅ High accuracy on arithmetic and logic
- ✅ Easy to use
## Requirements
```bash
pip install transformers torch
```
## Base model
This model uses `Navid-AI/Yehia-7B-preview` as its base model, with a chain-of-thought layer added on top.
## Advanced usage
```python
# For finer control over the generation parameters
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("JasperV13/yehia-7b-CoT", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("JasperV13/yehia-7b-CoT")

# You can customize the generation settings
question = "كيف أحل معادلة من الدرجة الثانية؟"  # "How do I solve a quadratic equation?"
inputs = tokenizer(question, return_tensors="pt")

# Generate with custom parameters
outputs = model.generate(
    **inputs,
    max_length=500,
    temperature=0.7,
    do_sample=True
)
answer = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(answer)
```
## Version
- Version: 1.0
- Created: 2025
- Developer: JasperV13
## License
Apache 2.0
|
07-jobz-hunting-viral-video/Original.Full.Clip.Jobz.Hunting.Sajal.Malik.Viral.nimra.mehra.Video.Leaks.Official | 07-jobz-hunting-viral-video | 2025-05-27T19:30:55Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-27T19:30:32Z | |
nimra-mehra-hd/Link.Video.18.nimra.mehra.jobz.hunting.video.nimra.mehra.video.nimra.mehra | nimra-mehra-hd | 2025-05-27T19:30:23Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-27T19:26:02Z | |
task-aware/Llama_3.2_3B_Instruct | task-aware | 2025-05-27T19:30:12Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-24T16:38:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
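The card leaves this section empty; a minimal hedged sketch for a conversational text-generation checkpoint (the prompt and generation settings are illustrative only) might be:
```python
# Hedged starter sketch; not an official snippet from the model authors.
from transformers import pipeline

generator = pipeline("text-generation", model="task-aware/Llama_3.2_3B_Instruct", device_map="auto")
output = generator(
    [{"role": "user", "content": "Explain task-aware instruction tuning in one paragraph."}],
    max_new_tokens=128,
    return_full_text=False,
)[0]
print(output["generated_text"])
```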
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
goalaphx/outputs_qcm_then_fitb | goalaphx | 2025-05-27T19:26:42Z | 0 | 0 | peft | [
"peft",
"safetensors",
"gguf",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T19:20:27Z | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
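The frontmatter states the base model (Qwen/Qwen2.5-1.5B-Instruct) and the library (PEFT), so a hedged loading sketch (the prompt and generation settings are illustrative) could be:
```python
# Hedged sketch: attach this PEFT adapter to its stated base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B-Instruct", device_map="auto")
model = PeftModel.from_pretrained(base, "goalaphx/outputs_qcm_then_fitb")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B-Instruct")

inputs = tokenizer("Fill in the blank: Paris is the capital of ____.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```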
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
shishirahm3d/lawyer | shishirahm3d | 2025-05-27T19:26:05Z | 4 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-05-27T03:30:50Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** shishirahm3d
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
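
Because the repo ships GGUF files (per the tags), local inference via `llama-cpp-python` should be possible; in this hedged sketch the filename glob and the prompt are assumptions.
```python
# Hedged sketch: local chat inference over the repo's GGUF file.
from llama_cpp import Llama

llm = Llama.from_pretrained(repo_id="shishirahm3d/lawyer", filename="*.gguf")  # glob is an assumption
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What does 'consideration' mean in contract law?"}],
    max_tokens=200,
)
print(out["choices"][0]["message"]["content"])
```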
|
Alexhuou/MNLP_M2_document_encoder | Alexhuou | 2025-05-27T19:25:39Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"mteb",
"sentence-similarity",
"Sentence Transformers",
"en",
"arxiv:2308.03281",
"license:mit",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2025-05-27T19:22:29Z | ---
tags:
- mteb
- sentence-similarity
- sentence-transformers
- Sentence Transformers
model-index:
- name: gte-large
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 72.62686567164178
- type: ap
value: 34.46944126809772
- type: f1
value: 66.23684353950857
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 92.51805
- type: ap
value: 89.49842783330848
- type: f1
value: 92.51112169431808
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 49.074
- type: f1
value: 48.44785682572955
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.077
- type: map_at_10
value: 48.153
- type: map_at_100
value: 48.963
- type: map_at_1000
value: 48.966
- type: map_at_3
value: 43.184
- type: map_at_5
value: 46.072
- type: mrr_at_1
value: 33.073
- type: mrr_at_10
value: 48.54
- type: mrr_at_100
value: 49.335
- type: mrr_at_1000
value: 49.338
- type: mrr_at_3
value: 43.563
- type: mrr_at_5
value: 46.383
- type: ndcg_at_1
value: 32.077
- type: ndcg_at_10
value: 57.158
- type: ndcg_at_100
value: 60.324999999999996
- type: ndcg_at_1000
value: 60.402
- type: ndcg_at_3
value: 46.934
- type: ndcg_at_5
value: 52.158
- type: precision_at_1
value: 32.077
- type: precision_at_10
value: 8.591999999999999
- type: precision_at_100
value: 0.991
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 19.275000000000002
- type: precision_at_5
value: 14.111
- type: recall_at_1
value: 32.077
- type: recall_at_10
value: 85.917
- type: recall_at_100
value: 99.075
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 57.824
- type: recall_at_5
value: 70.555
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 48.619246083417295
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 43.3574067664688
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 63.06359661829253
- type: mrr
value: 76.15596007562766
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 90.25407547368691
- type: cos_sim_spearman
value: 88.65081514968477
- type: euclidean_pearson
value: 88.14857116664494
- type: euclidean_spearman
value: 88.50683596540692
- type: manhattan_pearson
value: 87.9654797992225
- type: manhattan_spearman
value: 88.21164851646908
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 86.05844155844157
- type: f1
value: 86.01555597681825
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 39.10510519739522
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 36.84689960264385
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.800000000000004
- type: map_at_10
value: 44.857
- type: map_at_100
value: 46.512
- type: map_at_1000
value: 46.635
- type: map_at_3
value: 41.062
- type: map_at_5
value: 43.126
- type: mrr_at_1
value: 39.628
- type: mrr_at_10
value: 50.879
- type: mrr_at_100
value: 51.605000000000004
- type: mrr_at_1000
value: 51.641000000000005
- type: mrr_at_3
value: 48.14
- type: mrr_at_5
value: 49.835
- type: ndcg_at_1
value: 39.628
- type: ndcg_at_10
value: 51.819
- type: ndcg_at_100
value: 57.318999999999996
- type: ndcg_at_1000
value: 58.955999999999996
- type: ndcg_at_3
value: 46.409
- type: ndcg_at_5
value: 48.825
- type: precision_at_1
value: 39.628
- type: precision_at_10
value: 10.072000000000001
- type: precision_at_100
value: 1.625
- type: precision_at_1000
value: 0.21
- type: precision_at_3
value: 22.556
- type: precision_at_5
value: 16.309
- type: recall_at_1
value: 32.800000000000004
- type: recall_at_10
value: 65.078
- type: recall_at_100
value: 87.491
- type: recall_at_1000
value: 97.514
- type: recall_at_3
value: 49.561
- type: recall_at_5
value: 56.135999999999996
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.614
- type: map_at_10
value: 43.578
- type: map_at_100
value: 44.897
- type: map_at_1000
value: 45.023
- type: map_at_3
value: 40.282000000000004
- type: map_at_5
value: 42.117
- type: mrr_at_1
value: 40.510000000000005
- type: mrr_at_10
value: 49.428
- type: mrr_at_100
value: 50.068999999999996
- type: mrr_at_1000
value: 50.111000000000004
- type: mrr_at_3
value: 47.176
- type: mrr_at_5
value: 48.583999999999996
- type: ndcg_at_1
value: 40.510000000000005
- type: ndcg_at_10
value: 49.478
- type: ndcg_at_100
value: 53.852
- type: ndcg_at_1000
value: 55.782
- type: ndcg_at_3
value: 45.091
- type: ndcg_at_5
value: 47.19
- type: precision_at_1
value: 40.510000000000005
- type: precision_at_10
value: 9.363000000000001
- type: precision_at_100
value: 1.51
- type: precision_at_1000
value: 0.196
- type: precision_at_3
value: 21.741
- type: precision_at_5
value: 15.465000000000002
- type: recall_at_1
value: 32.614
- type: recall_at_10
value: 59.782000000000004
- type: recall_at_100
value: 78.012
- type: recall_at_1000
value: 90.319
- type: recall_at_3
value: 46.825
- type: recall_at_5
value: 52.688
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.266000000000005
- type: map_at_10
value: 53.756
- type: map_at_100
value: 54.809
- type: map_at_1000
value: 54.855
- type: map_at_3
value: 50.073
- type: map_at_5
value: 52.293
- type: mrr_at_1
value: 46.332
- type: mrr_at_10
value: 57.116
- type: mrr_at_100
value: 57.767
- type: mrr_at_1000
value: 57.791000000000004
- type: mrr_at_3
value: 54.461999999999996
- type: mrr_at_5
value: 56.092
- type: ndcg_at_1
value: 46.332
- type: ndcg_at_10
value: 60.092
- type: ndcg_at_100
value: 64.034
- type: ndcg_at_1000
value: 64.937
- type: ndcg_at_3
value: 54.071000000000005
- type: ndcg_at_5
value: 57.254000000000005
- type: precision_at_1
value: 46.332
- type: precision_at_10
value: 9.799
- type: precision_at_100
value: 1.278
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 24.368000000000002
- type: precision_at_5
value: 16.89
- type: recall_at_1
value: 40.266000000000005
- type: recall_at_10
value: 75.41499999999999
- type: recall_at_100
value: 92.01700000000001
- type: recall_at_1000
value: 98.379
- type: recall_at_3
value: 59.476
- type: recall_at_5
value: 67.297
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 28.589
- type: map_at_10
value: 37.755
- type: map_at_100
value: 38.881
- type: map_at_1000
value: 38.954
- type: map_at_3
value: 34.759
- type: map_at_5
value: 36.544
- type: mrr_at_1
value: 30.734
- type: mrr_at_10
value: 39.742
- type: mrr_at_100
value: 40.774
- type: mrr_at_1000
value: 40.824
- type: mrr_at_3
value: 37.137
- type: mrr_at_5
value: 38.719
- type: ndcg_at_1
value: 30.734
- type: ndcg_at_10
value: 42.978
- type: ndcg_at_100
value: 48.309000000000005
- type: ndcg_at_1000
value: 50.068
- type: ndcg_at_3
value: 37.361
- type: ndcg_at_5
value: 40.268
- type: precision_at_1
value: 30.734
- type: precision_at_10
value: 6.565
- type: precision_at_100
value: 0.964
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 15.744
- type: precision_at_5
value: 11.096
- type: recall_at_1
value: 28.589
- type: recall_at_10
value: 57.126999999999995
- type: recall_at_100
value: 81.051
- type: recall_at_1000
value: 94.027
- type: recall_at_3
value: 42.045
- type: recall_at_5
value: 49.019
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.5
- type: map_at_10
value: 27.950999999999997
- type: map_at_100
value: 29.186
- type: map_at_1000
value: 29.298000000000002
- type: map_at_3
value: 25.141000000000002
- type: map_at_5
value: 26.848
- type: mrr_at_1
value: 22.637
- type: mrr_at_10
value: 32.572
- type: mrr_at_100
value: 33.472
- type: mrr_at_1000
value: 33.533
- type: mrr_at_3
value: 29.747
- type: mrr_at_5
value: 31.482
- type: ndcg_at_1
value: 22.637
- type: ndcg_at_10
value: 33.73
- type: ndcg_at_100
value: 39.568
- type: ndcg_at_1000
value: 42.201
- type: ndcg_at_3
value: 28.505999999999997
- type: ndcg_at_5
value: 31.255
- type: precision_at_1
value: 22.637
- type: precision_at_10
value: 6.281000000000001
- type: precision_at_100
value: 1.073
- type: precision_at_1000
value: 0.14300000000000002
- type: precision_at_3
value: 13.847000000000001
- type: precision_at_5
value: 10.224
- type: recall_at_1
value: 18.5
- type: recall_at_10
value: 46.744
- type: recall_at_100
value: 72.072
- type: recall_at_1000
value: 91.03999999999999
- type: recall_at_3
value: 32.551
- type: recall_at_5
value: 39.533
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.602
- type: map_at_10
value: 42.18
- type: map_at_100
value: 43.6
- type: map_at_1000
value: 43.704
- type: map_at_3
value: 38.413000000000004
- type: map_at_5
value: 40.626
- type: mrr_at_1
value: 37.344
- type: mrr_at_10
value: 47.638000000000005
- type: mrr_at_100
value: 48.485
- type: mrr_at_1000
value: 48.52
- type: mrr_at_3
value: 44.867000000000004
- type: mrr_at_5
value: 46.566
- type: ndcg_at_1
value: 37.344
- type: ndcg_at_10
value: 48.632
- type: ndcg_at_100
value: 54.215
- type: ndcg_at_1000
value: 55.981
- type: ndcg_at_3
value: 42.681999999999995
- type: ndcg_at_5
value: 45.732
- type: precision_at_1
value: 37.344
- type: precision_at_10
value: 8.932
- type: precision_at_100
value: 1.376
- type: precision_at_1000
value: 0.17099999999999999
- type: precision_at_3
value: 20.276
- type: precision_at_5
value: 14.726
- type: recall_at_1
value: 30.602
- type: recall_at_10
value: 62.273
- type: recall_at_100
value: 85.12100000000001
- type: recall_at_1000
value: 96.439
- type: recall_at_3
value: 45.848
- type: recall_at_5
value: 53.615
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.952
- type: map_at_10
value: 35.177
- type: map_at_100
value: 36.59
- type: map_at_1000
value: 36.703
- type: map_at_3
value: 31.261
- type: map_at_5
value: 33.222
- type: mrr_at_1
value: 29.337999999999997
- type: mrr_at_10
value: 40.152
- type: mrr_at_100
value: 40.963
- type: mrr_at_1000
value: 41.016999999999996
- type: mrr_at_3
value: 36.91
- type: mrr_at_5
value: 38.685
- type: ndcg_at_1
value: 29.337999999999997
- type: ndcg_at_10
value: 41.994
- type: ndcg_at_100
value: 47.587
- type: ndcg_at_1000
value: 49.791000000000004
- type: ndcg_at_3
value: 35.27
- type: ndcg_at_5
value: 38.042
- type: precision_at_1
value: 29.337999999999997
- type: precision_at_10
value: 8.276
- type: precision_at_100
value: 1.276
- type: precision_at_1000
value: 0.164
- type: precision_at_3
value: 17.161
- type: precision_at_5
value: 12.671
- type: recall_at_1
value: 23.952
- type: recall_at_10
value: 57.267
- type: recall_at_100
value: 80.886
- type: recall_at_1000
value: 95.611
- type: recall_at_3
value: 38.622
- type: recall_at_5
value: 45.811
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.092083333333335
- type: map_at_10
value: 37.2925
- type: map_at_100
value: 38.57041666666666
- type: map_at_1000
value: 38.68141666666667
- type: map_at_3
value: 34.080000000000005
- type: map_at_5
value: 35.89958333333333
- type: mrr_at_1
value: 31.94758333333333
- type: mrr_at_10
value: 41.51049999999999
- type: mrr_at_100
value: 42.36099999999999
- type: mrr_at_1000
value: 42.4125
- type: mrr_at_3
value: 38.849583333333335
- type: mrr_at_5
value: 40.448249999999994
- type: ndcg_at_1
value: 31.94758333333333
- type: ndcg_at_10
value: 43.17633333333333
- type: ndcg_at_100
value: 48.45241666666668
- type: ndcg_at_1000
value: 50.513999999999996
- type: ndcg_at_3
value: 37.75216666666667
- type: ndcg_at_5
value: 40.393833333333326
- type: precision_at_1
value: 31.94758333333333
- type: precision_at_10
value: 7.688916666666666
- type: precision_at_100
value: 1.2250833333333333
- type: precision_at_1000
value: 0.1595
- type: precision_at_3
value: 17.465999999999998
- type: precision_at_5
value: 12.548083333333333
- type: recall_at_1
value: 27.092083333333335
- type: recall_at_10
value: 56.286583333333326
- type: recall_at_100
value: 79.09033333333333
- type: recall_at_1000
value: 93.27483333333335
- type: recall_at_3
value: 41.35325
- type: recall_at_5
value: 48.072750000000006
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.825
- type: map_at_10
value: 33.723
- type: map_at_100
value: 34.74
- type: map_at_1000
value: 34.824
- type: map_at_3
value: 31.369000000000003
- type: map_at_5
value: 32.533
- type: mrr_at_1
value: 29.293999999999997
- type: mrr_at_10
value: 36.84
- type: mrr_at_100
value: 37.681
- type: mrr_at_1000
value: 37.742
- type: mrr_at_3
value: 34.79
- type: mrr_at_5
value: 35.872
- type: ndcg_at_1
value: 29.293999999999997
- type: ndcg_at_10
value: 38.385999999999996
- type: ndcg_at_100
value: 43.327
- type: ndcg_at_1000
value: 45.53
- type: ndcg_at_3
value: 33.985
- type: ndcg_at_5
value: 35.817
- type: precision_at_1
value: 29.293999999999997
- type: precision_at_10
value: 6.12
- type: precision_at_100
value: 0.9329999999999999
- type: precision_at_1000
value: 0.11900000000000001
- type: precision_at_3
value: 14.621999999999998
- type: precision_at_5
value: 10.030999999999999
- type: recall_at_1
value: 25.825
- type: recall_at_10
value: 49.647000000000006
- type: recall_at_100
value: 72.32300000000001
- type: recall_at_1000
value: 88.62400000000001
- type: recall_at_3
value: 37.366
- type: recall_at_5
value: 41.957
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.139
- type: map_at_10
value: 26.107000000000003
- type: map_at_100
value: 27.406999999999996
- type: map_at_1000
value: 27.535999999999998
- type: map_at_3
value: 23.445
- type: map_at_5
value: 24.916
- type: mrr_at_1
value: 21.817
- type: mrr_at_10
value: 29.99
- type: mrr_at_100
value: 31.052000000000003
- type: mrr_at_1000
value: 31.128
- type: mrr_at_3
value: 27.627000000000002
- type: mrr_at_5
value: 29.005
- type: ndcg_at_1
value: 21.817
- type: ndcg_at_10
value: 31.135
- type: ndcg_at_100
value: 37.108000000000004
- type: ndcg_at_1000
value: 39.965
- type: ndcg_at_3
value: 26.439
- type: ndcg_at_5
value: 28.655
- type: precision_at_1
value: 21.817
- type: precision_at_10
value: 5.757000000000001
- type: precision_at_100
value: 1.036
- type: precision_at_1000
value: 0.147
- type: precision_at_3
value: 12.537
- type: precision_at_5
value: 9.229
- type: recall_at_1
value: 18.139
- type: recall_at_10
value: 42.272999999999996
- type: recall_at_100
value: 68.657
- type: recall_at_1000
value: 88.93799999999999
- type: recall_at_3
value: 29.266
- type: recall_at_5
value: 34.892
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.755000000000003
- type: map_at_10
value: 37.384
- type: map_at_100
value: 38.56
- type: map_at_1000
value: 38.655
- type: map_at_3
value: 34.214
- type: map_at_5
value: 35.96
- type: mrr_at_1
value: 32.369
- type: mrr_at_10
value: 41.625
- type: mrr_at_100
value: 42.449
- type: mrr_at_1000
value: 42.502
- type: mrr_at_3
value: 38.899
- type: mrr_at_5
value: 40.489999999999995
- type: ndcg_at_1
value: 32.369
- type: ndcg_at_10
value: 43.287
- type: ndcg_at_100
value: 48.504999999999995
- type: ndcg_at_1000
value: 50.552
- type: ndcg_at_3
value: 37.549
- type: ndcg_at_5
value: 40.204
- type: precision_at_1
value: 32.369
- type: precision_at_10
value: 7.425
- type: precision_at_100
value: 1.134
- type: precision_at_1000
value: 0.14200000000000002
- type: precision_at_3
value: 17.102
- type: precision_at_5
value: 12.107999999999999
- type: recall_at_1
value: 27.755000000000003
- type: recall_at_10
value: 57.071000000000005
- type: recall_at_100
value: 79.456
- type: recall_at_1000
value: 93.54299999999999
- type: recall_at_3
value: 41.298
- type: recall_at_5
value: 48.037
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.855
- type: map_at_10
value: 34.53
- type: map_at_100
value: 36.167
- type: map_at_1000
value: 36.394999999999996
- type: map_at_3
value: 31.037
- type: map_at_5
value: 33.119
- type: mrr_at_1
value: 30.631999999999998
- type: mrr_at_10
value: 39.763999999999996
- type: mrr_at_100
value: 40.77
- type: mrr_at_1000
value: 40.826
- type: mrr_at_3
value: 36.495
- type: mrr_at_5
value: 38.561
- type: ndcg_at_1
value: 30.631999999999998
- type: ndcg_at_10
value: 40.942
- type: ndcg_at_100
value: 47.07
- type: ndcg_at_1000
value: 49.363
- type: ndcg_at_3
value: 35.038000000000004
- type: ndcg_at_5
value: 38.161
- type: precision_at_1
value: 30.631999999999998
- type: precision_at_10
value: 7.983999999999999
- type: precision_at_100
value: 1.6070000000000002
- type: precision_at_1000
value: 0.246
- type: precision_at_3
value: 16.206
- type: precision_at_5
value: 12.253
- type: recall_at_1
value: 24.855
- type: recall_at_10
value: 53.291999999999994
- type: recall_at_100
value: 80.283
- type: recall_at_1000
value: 94.309
- type: recall_at_3
value: 37.257
- type: recall_at_5
value: 45.282
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.208
- type: map_at_10
value: 30.512
- type: map_at_100
value: 31.496000000000002
- type: map_at_1000
value: 31.595000000000002
- type: map_at_3
value: 27.904
- type: map_at_5
value: 29.491
- type: mrr_at_1
value: 22.736
- type: mrr_at_10
value: 32.379999999999995
- type: mrr_at_100
value: 33.245000000000005
- type: mrr_at_1000
value: 33.315
- type: mrr_at_3
value: 29.945
- type: mrr_at_5
value: 31.488
- type: ndcg_at_1
value: 22.736
- type: ndcg_at_10
value: 35.643
- type: ndcg_at_100
value: 40.535
- type: ndcg_at_1000
value: 43.042
- type: ndcg_at_3
value: 30.625000000000004
- type: ndcg_at_5
value: 33.323
- type: precision_at_1
value: 22.736
- type: precision_at_10
value: 5.6930000000000005
- type: precision_at_100
value: 0.889
- type: precision_at_1000
value: 0.122
- type: precision_at_3
value: 13.431999999999999
- type: precision_at_5
value: 9.575
- type: recall_at_1
value: 21.208
- type: recall_at_10
value: 49.47
- type: recall_at_100
value: 71.71499999999999
- type: recall_at_1000
value: 90.55499999999999
- type: recall_at_3
value: 36.124
- type: recall_at_5
value: 42.606
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 11.363
- type: map_at_10
value: 20.312
- type: map_at_100
value: 22.225
- type: map_at_1000
value: 22.411
- type: map_at_3
value: 16.68
- type: map_at_5
value: 18.608
- type: mrr_at_1
value: 25.537
- type: mrr_at_10
value: 37.933
- type: mrr_at_100
value: 38.875
- type: mrr_at_1000
value: 38.911
- type: mrr_at_3
value: 34.387
- type: mrr_at_5
value: 36.51
- type: ndcg_at_1
value: 25.537
- type: ndcg_at_10
value: 28.82
- type: ndcg_at_100
value: 36.341
- type: ndcg_at_1000
value: 39.615
- type: ndcg_at_3
value: 23.01
- type: ndcg_at_5
value: 25.269000000000002
- type: precision_at_1
value: 25.537
- type: precision_at_10
value: 9.153
- type: precision_at_100
value: 1.7319999999999998
- type: precision_at_1000
value: 0.234
- type: precision_at_3
value: 17.22
- type: precision_at_5
value: 13.629
- type: recall_at_1
value: 11.363
- type: recall_at_10
value: 35.382999999999996
- type: recall_at_100
value: 61.367000000000004
- type: recall_at_1000
value: 79.699
- type: recall_at_3
value: 21.495
- type: recall_at_5
value: 27.42
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.65
- type: map_at_10
value: 20.742
- type: map_at_100
value: 29.614
- type: map_at_1000
value: 31.373
- type: map_at_3
value: 14.667
- type: map_at_5
value: 17.186
- type: mrr_at_1
value: 69.75
- type: mrr_at_10
value: 76.762
- type: mrr_at_100
value: 77.171
- type: mrr_at_1000
value: 77.179
- type: mrr_at_3
value: 75.125
- type: mrr_at_5
value: 76.287
- type: ndcg_at_1
value: 57.62500000000001
- type: ndcg_at_10
value: 42.370999999999995
- type: ndcg_at_100
value: 47.897
- type: ndcg_at_1000
value: 55.393
- type: ndcg_at_3
value: 46.317
- type: ndcg_at_5
value: 43.906
- type: precision_at_1
value: 69.75
- type: precision_at_10
value: 33.95
- type: precision_at_100
value: 10.885
- type: precision_at_1000
value: 2.2239999999999998
- type: precision_at_3
value: 49.75
- type: precision_at_5
value: 42.3
- type: recall_at_1
value: 9.65
- type: recall_at_10
value: 26.117
- type: recall_at_100
value: 55.084
- type: recall_at_1000
value: 78.62400000000001
- type: recall_at_3
value: 15.823
- type: recall_at_5
value: 19.652
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 47.885
- type: f1
value: 42.99567641346983
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.97
- type: map_at_10
value: 80.34599999999999
- type: map_at_100
value: 80.571
- type: map_at_1000
value: 80.584
- type: map_at_3
value: 79.279
- type: map_at_5
value: 79.94
- type: mrr_at_1
value: 76.613
- type: mrr_at_10
value: 85.15700000000001
- type: mrr_at_100
value: 85.249
- type: mrr_at_1000
value: 85.252
- type: mrr_at_3
value: 84.33800000000001
- type: mrr_at_5
value: 84.89
- type: ndcg_at_1
value: 76.613
- type: ndcg_at_10
value: 84.53399999999999
- type: ndcg_at_100
value: 85.359
- type: ndcg_at_1000
value: 85.607
- type: ndcg_at_3
value: 82.76599999999999
- type: ndcg_at_5
value: 83.736
- type: precision_at_1
value: 76.613
- type: precision_at_10
value: 10.206
- type: precision_at_100
value: 1.083
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 31.913000000000004
- type: precision_at_5
value: 19.769000000000002
- type: recall_at_1
value: 70.97
- type: recall_at_10
value: 92.674
- type: recall_at_100
value: 95.985
- type: recall_at_1000
value: 97.57000000000001
- type: recall_at_3
value: 87.742
- type: recall_at_5
value: 90.28
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.494
- type: map_at_10
value: 36.491
- type: map_at_100
value: 38.550000000000004
- type: map_at_1000
value: 38.726
- type: map_at_3
value: 31.807000000000002
- type: map_at_5
value: 34.299
- type: mrr_at_1
value: 44.907000000000004
- type: mrr_at_10
value: 53.146
- type: mrr_at_100
value: 54.013999999999996
- type: mrr_at_1000
value: 54.044000000000004
- type: mrr_at_3
value: 50.952
- type: mrr_at_5
value: 52.124
- type: ndcg_at_1
value: 44.907000000000004
- type: ndcg_at_10
value: 44.499
- type: ndcg_at_100
value: 51.629000000000005
- type: ndcg_at_1000
value: 54.367
- type: ndcg_at_3
value: 40.900999999999996
- type: ndcg_at_5
value: 41.737
- type: precision_at_1
value: 44.907000000000004
- type: precision_at_10
value: 12.346
- type: precision_at_100
value: 1.974
- type: precision_at_1000
value: 0.246
- type: precision_at_3
value: 27.366
- type: precision_at_5
value: 19.846
- type: recall_at_1
value: 22.494
- type: recall_at_10
value: 51.156
- type: recall_at_100
value: 77.11200000000001
- type: recall_at_1000
value: 93.44
- type: recall_at_3
value: 36.574
- type: recall_at_5
value: 42.361
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.568999999999996
- type: map_at_10
value: 58.485
- type: map_at_100
value: 59.358999999999995
- type: map_at_1000
value: 59.429
- type: map_at_3
value: 55.217000000000006
- type: map_at_5
value: 57.236
- type: mrr_at_1
value: 77.137
- type: mrr_at_10
value: 82.829
- type: mrr_at_100
value: 83.04599999999999
- type: mrr_at_1000
value: 83.05399999999999
- type: mrr_at_3
value: 81.904
- type: mrr_at_5
value: 82.50800000000001
- type: ndcg_at_1
value: 77.137
- type: ndcg_at_10
value: 67.156
- type: ndcg_at_100
value: 70.298
- type: ndcg_at_1000
value: 71.65700000000001
- type: ndcg_at_3
value: 62.535
- type: ndcg_at_5
value: 65.095
- type: precision_at_1
value: 77.137
- type: precision_at_10
value: 13.911999999999999
- type: precision_at_100
value: 1.6389999999999998
- type: precision_at_1000
value: 0.182
- type: precision_at_3
value: 39.572
- type: precision_at_5
value: 25.766
- type: recall_at_1
value: 38.568999999999996
- type: recall_at_10
value: 69.56099999999999
- type: recall_at_100
value: 81.931
- type: recall_at_1000
value: 90.91799999999999
- type: recall_at_3
value: 59.358999999999995
- type: recall_at_5
value: 64.416
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 88.45600000000002
- type: ap
value: 84.09725115338568
- type: f1
value: 88.41874909080512
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 21.404999999999998
- type: map_at_10
value: 33.921
- type: map_at_100
value: 35.116
- type: map_at_1000
value: 35.164
- type: map_at_3
value: 30.043999999999997
- type: map_at_5
value: 32.327
- type: mrr_at_1
value: 21.977
- type: mrr_at_10
value: 34.505
- type: mrr_at_100
value: 35.638999999999996
- type: mrr_at_1000
value: 35.68
- type: mrr_at_3
value: 30.703999999999997
- type: mrr_at_5
value: 32.96
- type: ndcg_at_1
value: 21.963
- type: ndcg_at_10
value: 40.859
- type: ndcg_at_100
value: 46.614
- type: ndcg_at_1000
value: 47.789
- type: ndcg_at_3
value: 33.007999999999996
- type: ndcg_at_5
value: 37.084
- type: precision_at_1
value: 21.963
- type: precision_at_10
value: 6.493
- type: precision_at_100
value: 0.938
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.155000000000001
- type: precision_at_5
value: 10.544
- type: recall_at_1
value: 21.404999999999998
- type: recall_at_10
value: 62.175000000000004
- type: recall_at_100
value: 88.786
- type: recall_at_1000
value: 97.738
- type: recall_at_3
value: 40.925
- type: recall_at_5
value: 50.722
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.50661194710442
- type: f1
value: 93.30311193153668
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 73.24669402644778
- type: f1
value: 54.23122108002977
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.61936785474109
- type: f1
value: 70.52644941025565
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.76529926025555
- type: f1
value: 77.26872729322514
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 33.39450293021839
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 31.757796879839294
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 32.62512146657428
- type: mrr
value: 33.84624322066173
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.462
- type: map_at_10
value: 14.947
- type: map_at_100
value: 19.344
- type: map_at_1000
value: 20.933
- type: map_at_3
value: 10.761999999999999
- type: map_at_5
value: 12.744
- type: mrr_at_1
value: 47.988
- type: mrr_at_10
value: 57.365
- type: mrr_at_100
value: 57.931
- type: mrr_at_1000
value: 57.96
- type: mrr_at_3
value: 54.85
- type: mrr_at_5
value: 56.569
- type: ndcg_at_1
value: 46.129999999999995
- type: ndcg_at_10
value: 38.173
- type: ndcg_at_100
value: 35.983
- type: ndcg_at_1000
value: 44.507000000000005
- type: ndcg_at_3
value: 42.495
- type: ndcg_at_5
value: 41.019
- type: precision_at_1
value: 47.678
- type: precision_at_10
value: 28.731
- type: precision_at_100
value: 9.232
- type: precision_at_1000
value: 2.202
- type: precision_at_3
value: 39.628
- type: precision_at_5
value: 35.851
- type: recall_at_1
value: 6.462
- type: recall_at_10
value: 18.968
- type: recall_at_100
value: 37.131
- type: recall_at_1000
value: 67.956
- type: recall_at_3
value: 11.905000000000001
- type: recall_at_5
value: 15.097
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.335
- type: map_at_10
value: 46.611999999999995
- type: map_at_100
value: 47.632000000000005
- type: map_at_1000
value: 47.661
- type: map_at_3
value: 41.876999999999995
- type: map_at_5
value: 44.799
- type: mrr_at_1
value: 34.125
- type: mrr_at_10
value: 49.01
- type: mrr_at_100
value: 49.75
- type: mrr_at_1000
value: 49.768
- type: mrr_at_3
value: 45.153
- type: mrr_at_5
value: 47.589999999999996
- type: ndcg_at_1
value: 34.125
- type: ndcg_at_10
value: 54.777
- type: ndcg_at_100
value: 58.914
- type: ndcg_at_1000
value: 59.521
- type: ndcg_at_3
value: 46.015
- type: ndcg_at_5
value: 50.861000000000004
- type: precision_at_1
value: 34.125
- type: precision_at_10
value: 9.166
- type: precision_at_100
value: 1.149
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 21.147
- type: precision_at_5
value: 15.469
- type: recall_at_1
value: 30.335
- type: recall_at_10
value: 77.194
- type: recall_at_100
value: 94.812
- type: recall_at_1000
value: 99.247
- type: recall_at_3
value: 54.681000000000004
- type: recall_at_5
value: 65.86800000000001
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.62
- type: map_at_10
value: 84.536
- type: map_at_100
value: 85.167
- type: map_at_1000
value: 85.184
- type: map_at_3
value: 81.607
- type: map_at_5
value: 83.423
- type: mrr_at_1
value: 81.36
- type: mrr_at_10
value: 87.506
- type: mrr_at_100
value: 87.601
- type: mrr_at_1000
value: 87.601
- type: mrr_at_3
value: 86.503
- type: mrr_at_5
value: 87.179
- type: ndcg_at_1
value: 81.36
- type: ndcg_at_10
value: 88.319
- type: ndcg_at_100
value: 89.517
- type: ndcg_at_1000
value: 89.60900000000001
- type: ndcg_at_3
value: 85.423
- type: ndcg_at_5
value: 86.976
- type: precision_at_1
value: 81.36
- type: precision_at_10
value: 13.415
- type: precision_at_100
value: 1.529
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.342999999999996
- type: precision_at_5
value: 24.534
- type: recall_at_1
value: 70.62
- type: recall_at_10
value: 95.57600000000001
- type: recall_at_100
value: 99.624
- type: recall_at_1000
value: 99.991
- type: recall_at_3
value: 87.22
- type: recall_at_5
value: 91.654
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 60.826438478212744
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 64.24027467551447
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.997999999999999
- type: map_at_10
value: 14.267
- type: map_at_100
value: 16.843
- type: map_at_1000
value: 17.229
- type: map_at_3
value: 9.834
- type: map_at_5
value: 11.92
- type: mrr_at_1
value: 24.7
- type: mrr_at_10
value: 37.685
- type: mrr_at_100
value: 38.704
- type: mrr_at_1000
value: 38.747
- type: mrr_at_3
value: 34.150000000000006
- type: mrr_at_5
value: 36.075
- type: ndcg_at_1
value: 24.7
- type: ndcg_at_10
value: 23.44
- type: ndcg_at_100
value: 32.617000000000004
- type: ndcg_at_1000
value: 38.628
- type: ndcg_at_3
value: 21.747
- type: ndcg_at_5
value: 19.076
- type: precision_at_1
value: 24.7
- type: precision_at_10
value: 12.47
- type: precision_at_100
value: 2.564
- type: precision_at_1000
value: 0.4
- type: precision_at_3
value: 20.767
- type: precision_at_5
value: 17.06
- type: recall_at_1
value: 4.997999999999999
- type: recall_at_10
value: 25.3
- type: recall_at_100
value: 52.048
- type: recall_at_1000
value: 81.093
- type: recall_at_3
value: 12.642999999999999
- type: recall_at_5
value: 17.312
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 85.44942006292234
- type: cos_sim_spearman
value: 79.80930790660699
- type: euclidean_pearson
value: 82.93400777494863
- type: euclidean_spearman
value: 80.04664991110705
- type: manhattan_pearson
value: 82.93551681854949
- type: manhattan_spearman
value: 80.03156736837379
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 85.63574059135726
- type: cos_sim_spearman
value: 76.80552915288186
- type: euclidean_pearson
value: 82.46368529820518
- type: euclidean_spearman
value: 76.60338474719275
- type: manhattan_pearson
value: 82.4558617035968
- type: manhattan_spearman
value: 76.57936082895705
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 86.24116811084211
- type: cos_sim_spearman
value: 88.10998662068769
- type: euclidean_pearson
value: 87.04961732352689
- type: euclidean_spearman
value: 88.12543945864087
- type: manhattan_pearson
value: 86.9905224528854
- type: manhattan_spearman
value: 88.07827944705546
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 84.74847296555048
- type: cos_sim_spearman
value: 82.66200957916445
- type: euclidean_pearson
value: 84.48132256004965
- type: euclidean_spearman
value: 82.67915286000596
- type: manhattan_pearson
value: 84.44950477268334
- type: manhattan_spearman
value: 82.63327639173352
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 87.23056258027053
- type: cos_sim_spearman
value: 88.92791680286955
- type: euclidean_pearson
value: 88.13819235461933
- type: euclidean_spearman
value: 88.87294661361716
- type: manhattan_pearson
value: 88.14212133687899
- type: manhattan_spearman
value: 88.88551854529777
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 82.64179522732887
- type: cos_sim_spearman
value: 84.25028809903114
- type: euclidean_pearson
value: 83.40175015236979
- type: euclidean_spearman
value: 84.23369296429406
- type: manhattan_pearson
value: 83.43768174261321
- type: manhattan_spearman
value: 84.27855229214734
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 88.20378955494732
- type: cos_sim_spearman
value: 88.46863559173111
- type: euclidean_pearson
value: 88.8249295811663
- type: euclidean_spearman
value: 88.6312737724905
- type: manhattan_pearson
value: 88.87744466378827
- type: manhattan_spearman
value: 88.82908423767314
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 69.91342028796086
- type: cos_sim_spearman
value: 69.71495021867864
- type: euclidean_pearson
value: 70.65334330405646
- type: euclidean_spearman
value: 69.4321253472211
- type: manhattan_pearson
value: 70.59743494727465
- type: manhattan_spearman
value: 69.11695509297482
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 85.42451709766952
- type: cos_sim_spearman
value: 86.07166710670508
- type: euclidean_pearson
value: 86.12711421258899
- type: euclidean_spearman
value: 86.05232086925126
- type: manhattan_pearson
value: 86.15591089932126
- type: manhattan_spearman
value: 86.0890128623439
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 87.1976344717285
- type: mrr
value: 96.3703145075694
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 59.511
- type: map_at_10
value: 69.724
- type: map_at_100
value: 70.208
- type: map_at_1000
value: 70.22800000000001
- type: map_at_3
value: 66.986
- type: map_at_5
value: 68.529
- type: mrr_at_1
value: 62.333000000000006
- type: mrr_at_10
value: 70.55
- type: mrr_at_100
value: 70.985
- type: mrr_at_1000
value: 71.004
- type: mrr_at_3
value: 68.611
- type: mrr_at_5
value: 69.728
- type: ndcg_at_1
value: 62.333000000000006
- type: ndcg_at_10
value: 74.265
- type: ndcg_at_100
value: 76.361
- type: ndcg_at_1000
value: 76.82900000000001
- type: ndcg_at_3
value: 69.772
- type: ndcg_at_5
value: 71.94800000000001
- type: precision_at_1
value: 62.333000000000006
- type: precision_at_10
value: 9.9
- type: precision_at_100
value: 1.093
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 27.444000000000003
- type: precision_at_5
value: 18
- type: recall_at_1
value: 59.511
- type: recall_at_10
value: 87.156
- type: recall_at_100
value: 96.5
- type: recall_at_1000
value: 100
- type: recall_at_3
value: 75.2
- type: recall_at_5
value: 80.661
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.81683168316832
- type: cos_sim_ap
value: 95.74716566563774
- type: cos_sim_f1
value: 90.64238745574103
- type: cos_sim_precision
value: 91.7093142272262
- type: cos_sim_recall
value: 89.60000000000001
- type: dot_accuracy
value: 99.69405940594059
- type: dot_ap
value: 91.09013507754594
- type: dot_f1
value: 84.54227113556779
- type: dot_precision
value: 84.58458458458459
- type: dot_recall
value: 84.5
- type: euclidean_accuracy
value: 99.81782178217821
- type: euclidean_ap
value: 95.6324301072609
- type: euclidean_f1
value: 90.58341862845445
- type: euclidean_precision
value: 92.76729559748428
- type: euclidean_recall
value: 88.5
- type: manhattan_accuracy
value: 99.81980198019802
- type: manhattan_ap
value: 95.68510494437183
- type: manhattan_f1
value: 90.58945191313342
- type: manhattan_precision
value: 93.79014989293361
- type: manhattan_recall
value: 87.6
- type: max_accuracy
value: 99.81980198019802
- type: max_ap
value: 95.74716566563774
- type: max_f1
value: 90.64238745574103
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 67.63761899427078
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 36.572473369697235
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 53.63000245208579
- type: mrr
value: 54.504193722943725
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.300791939416545
- type: cos_sim_spearman
value: 31.662904057924123
- type: dot_pearson
value: 26.21198530758316
- type: dot_spearman
value: 27.006921548904263
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.197
- type: map_at_10
value: 1.752
- type: map_at_100
value: 10.795
- type: map_at_1000
value: 27.18
- type: map_at_3
value: 0.5890000000000001
- type: map_at_5
value: 0.938
- type: mrr_at_1
value: 74
- type: mrr_at_10
value: 85.833
- type: mrr_at_100
value: 85.833
- type: mrr_at_1000
value: 85.833
- type: mrr_at_3
value: 85.333
- type: mrr_at_5
value: 85.833
- type: ndcg_at_1
value: 69
- type: ndcg_at_10
value: 70.22
- type: ndcg_at_100
value: 55.785
- type: ndcg_at_1000
value: 52.93600000000001
- type: ndcg_at_3
value: 72.084
- type: ndcg_at_5
value: 71.184
- type: precision_at_1
value: 74
- type: precision_at_10
value: 75.2
- type: precision_at_100
value: 57.3
- type: precision_at_1000
value: 23.302
- type: precision_at_3
value: 77.333
- type: precision_at_5
value: 75.6
- type: recall_at_1
value: 0.197
- type: recall_at_10
value: 2.019
- type: recall_at_100
value: 14.257
- type: recall_at_1000
value: 50.922
- type: recall_at_3
value: 0.642
- type: recall_at_5
value: 1.043
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.803
- type: map_at_10
value: 10.407
- type: map_at_100
value: 16.948
- type: map_at_1000
value: 18.424
- type: map_at_3
value: 5.405
- type: map_at_5
value: 6.908
- type: mrr_at_1
value: 36.735
- type: mrr_at_10
value: 50.221000000000004
- type: mrr_at_100
value: 51.388
- type: mrr_at_1000
value: 51.402
- type: mrr_at_3
value: 47.278999999999996
- type: mrr_at_5
value: 49.626
- type: ndcg_at_1
value: 34.694
- type: ndcg_at_10
value: 25.507
- type: ndcg_at_100
value: 38.296
- type: ndcg_at_1000
value: 49.492000000000004
- type: ndcg_at_3
value: 29.006999999999998
- type: ndcg_at_5
value: 25.979000000000003
- type: precision_at_1
value: 36.735
- type: precision_at_10
value: 22.041
- type: precision_at_100
value: 8.02
- type: precision_at_1000
value: 1.567
- type: precision_at_3
value: 28.571
- type: precision_at_5
value: 24.490000000000002
- type: recall_at_1
value: 2.803
- type: recall_at_10
value: 16.378
- type: recall_at_100
value: 50.489
- type: recall_at_1000
value: 85.013
- type: recall_at_3
value: 6.505
- type: recall_at_5
value: 9.243
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 70.55579999999999
- type: ap
value: 14.206982753316227
- type: f1
value: 54.372142814964285
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 56.57611771363893
- type: f1
value: 56.924172639063144
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 52.82304915719759
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 85.92716218632653
- type: cos_sim_ap
value: 73.73359122546046
- type: cos_sim_f1
value: 68.42559487116262
- type: cos_sim_precision
value: 64.22124508215691
- type: cos_sim_recall
value: 73.21899736147758
- type: dot_accuracy
value: 80.38981939560112
- type: dot_ap
value: 54.61060862444974
- type: dot_f1
value: 53.45710627400769
- type: dot_precision
value: 44.87638839125761
- type: dot_recall
value: 66.09498680738787
- type: euclidean_accuracy
value: 86.02849138701794
- type: euclidean_ap
value: 73.95673761922404
- type: euclidean_f1
value: 68.6783042394015
- type: euclidean_precision
value: 65.1063829787234
- type: euclidean_recall
value: 72.66490765171504
- type: manhattan_accuracy
value: 85.9808070572808
- type: manhattan_ap
value: 73.9050720058029
- type: manhattan_f1
value: 68.57560618983794
- type: manhattan_precision
value: 63.70839936608558
- type: manhattan_recall
value: 74.24802110817942
- type: max_accuracy
value: 86.02849138701794
- type: max_ap
value: 73.95673761922404
- type: max_f1
value: 68.6783042394015
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.72783017037295
- type: cos_sim_ap
value: 85.52705223340233
- type: cos_sim_f1
value: 77.91659078492079
- type: cos_sim_precision
value: 73.93378032764221
- type: cos_sim_recall
value: 82.35294117647058
- type: dot_accuracy
value: 85.41739434159972
- type: dot_ap
value: 77.17734818118443
- type: dot_f1
value: 71.63473589973144
- type: dot_precision
value: 66.96123719622415
- type: dot_recall
value: 77.00954727440714
- type: euclidean_accuracy
value: 88.68125897465751
- type: euclidean_ap
value: 85.47712213906692
- type: euclidean_f1
value: 77.81419950830664
- type: euclidean_precision
value: 75.37162649733006
- type: euclidean_recall
value: 80.42038805050817
- type: manhattan_accuracy
value: 88.67349710870494
- type: manhattan_ap
value: 85.46506475241955
- type: manhattan_f1
value: 77.87259084890393
- type: manhattan_precision
value: 74.54929577464789
- type: manhattan_recall
value: 81.50600554357868
- type: max_accuracy
value: 88.72783017037295
- type: max_ap
value: 85.52705223340233
- type: max_f1
value: 77.91659078492079
language:
- en
license: mit
---
# gte-large
General Text Embeddings (GTE) model, introduced in [Towards General Text Embeddings with Multi-stage Contrastive Learning](https://arxiv.org/abs/2308.03281).
The GTE models are trained by Alibaba DAMO Academy. They are based on the BERT framework and come in three sizes: [GTE-large](https://huggingface.co/thenlper/gte-large), [GTE-base](https://huggingface.co/thenlper/gte-base), and [GTE-small](https://huggingface.co/thenlper/gte-small). The models are trained on a large-scale corpus of relevance text pairs covering a wide range of domains and scenarios, which makes them applicable to a variety of downstream text embedding tasks, including **information retrieval**, **semantic textual similarity**, and **text reranking**.
## Metrics
We compared the performance of the GTE models with other popular text embedding models on the MTEB benchmark. For more detailed comparison results, please refer to the [MTEB leaderboard](https://huggingface.co/spaces/mteb/leaderboard).
| Model Name | Model Size (GB) | Dimension | Sequence Length | Average (56) | Clustering (11) | Pair Classification (3) | Reranking (4) | Retrieval (15) | STS (10) | Summarization (1) | Classification (12) |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [**gte-large**](https://huggingface.co/thenlper/gte-large) | 0.67 | 1024 | 512 | **63.13** | 46.84 | 85.00 | 59.13 | 52.22 | 83.35 | 31.66 | 73.33 |
| [**gte-base**](https://huggingface.co/thenlper/gte-base) | 0.22 | 768 | 512 | **62.39** | 46.2 | 84.57 | 58.61 | 51.14 | 82.3 | 31.17 | 73.01 |
| [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1.34 | 1024 | 512 | 62.25 | 44.49 | 86.03 | 56.61 | 50.56 | 82.05 | 30.19 | 75.24 |
| [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 0.44 | 768 | 512 | 61.5 | 43.80 | 85.73 | 55.91 | 50.29 | 81.05 | 30.28 | 73.84 |
| [**gte-small**](https://huggingface.co/thenlper/gte-small) | 0.07 | 384 | 512 | **61.36** | 44.89 | 83.54 | 57.7 | 49.46 | 82.07 | 30.42 | 72.31 |
| [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | - | 1536 | 8192 | 60.99 | 45.9 | 84.89 | 56.32 | 49.25 | 80.97 | 30.8 | 70.93 |
| [e5-small-v2](https://huggingface.co/intfloat/e5-small-v2) | 0.13 | 384 | 512 | 59.93 | 39.92 | 84.67 | 54.32 | 49.04 | 80.39 | 31.16 | 72.94 |
| [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 9.73 | 768 | 512 | 59.51 | 43.72 | 85.06 | 56.42 | 42.24 | 82.63 | 30.08 | 73.42 |
| [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 0.44 | 768 | 514 | 57.78 | 43.69 | 83.04 | 59.36 | 43.81 | 80.28 | 27.49 | 65.07 |
| [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 28.27 | 4096 | 2048 | 57.59 | 38.93 | 81.9 | 55.65 | 48.22 | 77.74 | 33.6 | 66.19 |
| [all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2) | 0.13 | 384 | 512 | 56.53 | 41.81 | 82.41 | 58.44 | 42.69 | 79.8 | 27.9 | 63.21 |
| [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) | 0.09 | 384 | 512 | 56.26 | 42.35 | 82.37 | 58.04 | 41.95 | 78.9 | 30.81 | 63.05 |
| [contriever-base-msmarco](https://huggingface.co/nthakur/contriever-base-msmarco) | 0.44 | 768 | 512 | 56.00 | 41.1 | 82.54 | 53.14 | 41.88 | 76.51 | 30.36 | 66.68 |
| [sentence-t5-base](https://huggingface.co/sentence-transformers/sentence-t5-base) | 0.22 | 768 | 512 | 55.27 | 40.21 | 85.18 | 53.09 | 33.63 | 81.14 | 31.39 | 69.81 |
## Usage
Code example:
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def average_pool(last_hidden_states: Tensor,
                 attention_mask: Tensor) -> Tensor:
    # Mean-pool the token embeddings, zeroing out padded positions first
    last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
input_texts = [
"what is the capital of China?",
"how to implement quick sort in python?",
"Beijing",
"sorting algorithms"
]
tokenizer = AutoTokenizer.from_pretrained("thenlper/gte-large")
model = AutoModel.from_pretrained("thenlper/gte-large")
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# (Optionally) normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:1] @ embeddings[1:].T) * 100
print(scores.tolist())
```
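Because the embeddings are L2-normalized via `F.normalize`, the scaled dot products above are cosine similarities (multiplied by 100); the score between the query and "Beijing" should be markedly higher than the scores for the unrelated texts.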
Use with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
sentences = ['That is a happy person', 'That is a very happy person']
model = SentenceTransformer('thenlper/gte-large')
embeddings = model.encode(sentences)
print(cos_sim(embeddings[0], embeddings[1]))
```
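For retrieval-style usage, a minimal sketch (the corpus and query below are illustrative; `util.semantic_search` is a sentence-transformers helper that ranks corpus entries by cosine similarity):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('thenlper/gte-large')
corpus = ['Beijing is the capital of China.', 'Quick sort is a divide-and-conquer algorithm.']
query = 'what is the capital of China?'
# Encode corpus and query, then rank corpus entries by cosine similarity
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)
print(hits)  # e.g. [[{'corpus_id': 0, 'score': ...}, {'corpus_id': 1, 'score': ...}]]
```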
### Limitation
This model supports English texts only, and any input longer than 512 tokens is truncated to that maximum length.
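A minimal sketch for checking whether an input will be truncated (reusing the `thenlper/gte-large` tokenizer from the usage example above; the example text is illustrative):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('thenlper/gte-large')
text = 'a potentially very long document ...'
# Tokenize without truncation to measure the full token length
n_tokens = len(tokenizer(text, truncation=False)['input_ids'])
if n_tokens > 512:
    print(f'{n_tokens} tokens: only the first 512 will be embedded')
```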
### Citation
If you find our paper or models helpful, please consider citing them as follows:
```
@article{li2023towards,
title={Towards general text embeddings with multi-stage contrastive learning},
author={Li, Zehan and Zhang, Xin and Zhang, Yanzhao and Long, Dingkun and Xie, Pengjun and Zhang, Meishan},
journal={arXiv preprint arXiv:2308.03281},
year={2023}
}
``` |
sanekalas/t5-hw3-shumovav | sanekalas | 2025-05-27T19:23:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2025-05-27T19:23:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Moklemok/CodeSpace | Moklemok | 2025-05-27T19:21:03Z | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
]
| null | 2025-05-27T19:21:03Z | ---
license: bigcode-openrail-m
---
|
Lubna-qureshi-Hd/lubna.qureshi.viral.video.HOT.NEws.Today.Trending.Latest.Video | Lubna-qureshi-Hd | 2025-05-27T19:20:58Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-27T19:17:26Z | [🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/?V=Lubna-qureshi)
[🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/?V=Lubna-qureshi)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=Lubna-qureshi) |
jobz-hunting/wATCH.Jobz.Hunting.Sajal.Malik.viral.video.original | jobz-hunting | 2025-05-27T19:20:42Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-27T19:20:17Z | [🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )](https://videohere.top/?jobz-hunting)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://videohere.top/?jobz-hunting)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?jobz-hunting) |
ErikCikalleshi/alpaca_lora_model | ErikCikalleshi | 2025-05-27T19:19:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-22T19:35:59Z | ---
base_model: unsloth/qwen3-14b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ErikCikalleshi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-14b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
aamijar/Llama-2-7b-hf-lora-r1024-boolq-portlora-epochs7 | aamijar | 2025-05-27T19:19:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T19:19:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
milpu02/Akkgsykmix | milpu02 | 2025-05-27T19:18:07Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
]
| text-to-image | 2025-05-27T19:17:54Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: Akkgsyk
output:
url: images/pixai-1845374734374697074-2.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Akkgsyk
---
# sdxl
<Gallery />
## Trigger words
You should use `Akkgsyk` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/milpu02/Akkgsykmix/tree/main) them in the Files & versions tab.
|
gradientrouting-spar/medical_task_qwen_3_8b_ft_trainers_seed_42_epoch_1 | gradientrouting-spar | 2025-05-27T19:17:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T19:15:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Videos-CCTV-wiring-cikgu-viral-clip/Original.Bocor.Video.CCTV.wiring.cikgu.video.nur.fadhilah.binti.zainal.guru.part.2.video | Videos-CCTV-wiring-cikgu-viral-clip | 2025-05-27T19:17:22Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-27T19:16:57Z | <a rel="nofollow" href="https://viralflix.xyz/leaked/?new">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️​</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?new">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️​</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?new"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
|
flaviawallen/MNLP_M2_document_encoder | flaviawallen | 2025-05-27T19:15:56Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"onnx",
"safetensors",
"bert",
"feature-extraction",
"mteb",
"sentence-similarity",
"en",
"arxiv:2402.16829",
"arxiv:2212.09741",
"license:mit",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2025-05-27T19:14:26Z | ---
language:
- en
library_name: sentence-transformers
license: mit
pipeline_tag: sentence-similarity
tags:
- feature-extraction
- mteb
- sentence-similarity
- sentence-transformers
model-index:
- name: GIST-small-Embedding-v0
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 75.26865671641791
- type: ap
value: 38.25623793370476
- type: f1
value: 69.26434651320257
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 93.232225
- type: ap
value: 89.97936072879344
- type: f1
value: 93.22122653806187
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 49.715999999999994
- type: f1
value: 49.169789920136076
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 34.922
- type: map_at_10
value: 50.524
- type: map_at_100
value: 51.247
- type: map_at_1000
value: 51.249
- type: map_at_3
value: 45.887
- type: map_at_5
value: 48.592999999999996
- type: mrr_at_1
value: 34.922
- type: mrr_at_10
value: 50.382000000000005
- type: mrr_at_100
value: 51.104000000000006
- type: mrr_at_1000
value: 51.105999999999995
- type: mrr_at_3
value: 45.733000000000004
- type: mrr_at_5
value: 48.428
- type: ndcg_at_1
value: 34.922
- type: ndcg_at_10
value: 59.12
- type: ndcg_at_100
value: 62.083999999999996
- type: ndcg_at_1000
value: 62.137
- type: ndcg_at_3
value: 49.616
- type: ndcg_at_5
value: 54.501
- type: precision_at_1
value: 34.922
- type: precision_at_10
value: 8.649
- type: precision_at_100
value: 0.991
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 20.152
- type: precision_at_5
value: 14.466999999999999
- type: recall_at_1
value: 34.922
- type: recall_at_10
value: 86.48599999999999
- type: recall_at_100
value: 99.14699999999999
- type: recall_at_1000
value: 99.57300000000001
- type: recall_at_3
value: 60.455000000000005
- type: recall_at_5
value: 72.333
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 47.623282347623714
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 39.86487843524932
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 62.3290291318171
- type: mrr
value: 75.2379853141626
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 88.52002953574285
- type: cos_sim_spearman
value: 86.98752423842483
- type: euclidean_pearson
value: 86.89442688314197
- type: euclidean_spearman
value: 86.88631711307471
- type: manhattan_pearson
value: 87.03723618507175
- type: manhattan_spearman
value: 86.76041062975224
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 86.64935064935065
- type: f1
value: 86.61903824934998
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 39.21904455377494
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 35.43342755570654
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.843
- type: map_at_10
value: 43.379
- type: map_at_100
value: 44.946999999999996
- type: map_at_1000
value: 45.078
- type: map_at_3
value: 39.598
- type: map_at_5
value: 41.746
- type: mrr_at_1
value: 39.199
- type: mrr_at_10
value: 49.672
- type: mrr_at_100
value: 50.321000000000005
- type: mrr_at_1000
value: 50.365
- type: mrr_at_3
value: 46.805
- type: mrr_at_5
value: 48.579
- type: ndcg_at_1
value: 39.199
- type: ndcg_at_10
value: 50.163999999999994
- type: ndcg_at_100
value: 55.418
- type: ndcg_at_1000
value: 57.353
- type: ndcg_at_3
value: 44.716
- type: ndcg_at_5
value: 47.268
- type: precision_at_1
value: 39.199
- type: precision_at_10
value: 9.757
- type: precision_at_100
value: 1.552
- type: precision_at_1000
value: 0.20500000000000002
- type: precision_at_3
value: 21.602
- type: precision_at_5
value: 15.479000000000001
- type: recall_at_1
value: 31.843
- type: recall_at_10
value: 62.743
- type: recall_at_100
value: 84.78099999999999
- type: recall_at_1000
value: 96.86099999999999
- type: recall_at_3
value: 46.927
- type: recall_at_5
value: 54.355
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 29.321
- type: map_at_10
value: 39.062999999999995
- type: map_at_100
value: 40.403
- type: map_at_1000
value: 40.534
- type: map_at_3
value: 36.367
- type: map_at_5
value: 37.756
- type: mrr_at_1
value: 35.987
- type: mrr_at_10
value: 44.708999999999996
- type: mrr_at_100
value: 45.394
- type: mrr_at_1000
value: 45.436
- type: mrr_at_3
value: 42.463
- type: mrr_at_5
value: 43.663000000000004
- type: ndcg_at_1
value: 35.987
- type: ndcg_at_10
value: 44.585
- type: ndcg_at_100
value: 49.297999999999995
- type: ndcg_at_1000
value: 51.315
- type: ndcg_at_3
value: 40.569
- type: ndcg_at_5
value: 42.197
- type: precision_at_1
value: 35.987
- type: precision_at_10
value: 8.369
- type: precision_at_100
value: 1.366
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 19.427
- type: precision_at_5
value: 13.58
- type: recall_at_1
value: 29.321
- type: recall_at_10
value: 54.333
- type: recall_at_100
value: 74.178
- type: recall_at_1000
value: 86.732
- type: recall_at_3
value: 42.46
- type: recall_at_5
value: 47.089999999999996
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.811
- type: map_at_10
value: 51.114000000000004
- type: map_at_100
value: 52.22
- type: map_at_1000
value: 52.275000000000006
- type: map_at_3
value: 47.644999999999996
- type: map_at_5
value: 49.675000000000004
- type: mrr_at_1
value: 44.389
- type: mrr_at_10
value: 54.459
- type: mrr_at_100
value: 55.208999999999996
- type: mrr_at_1000
value: 55.239000000000004
- type: mrr_at_3
value: 51.954
- type: mrr_at_5
value: 53.571999999999996
- type: ndcg_at_1
value: 44.389
- type: ndcg_at_10
value: 56.979
- type: ndcg_at_100
value: 61.266
- type: ndcg_at_1000
value: 62.315
- type: ndcg_at_3
value: 51.342
- type: ndcg_at_5
value: 54.33
- type: precision_at_1
value: 44.389
- type: precision_at_10
value: 9.26
- type: precision_at_100
value: 1.226
- type: precision_at_1000
value: 0.136
- type: precision_at_3
value: 22.926
- type: precision_at_5
value: 15.987000000000002
- type: recall_at_1
value: 38.811
- type: recall_at_10
value: 70.841
- type: recall_at_100
value: 89.218
- type: recall_at_1000
value: 96.482
- type: recall_at_3
value: 56.123999999999995
- type: recall_at_5
value: 63.322
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.378
- type: map_at_10
value: 34.311
- type: map_at_100
value: 35.399
- type: map_at_1000
value: 35.482
- type: map_at_3
value: 31.917
- type: map_at_5
value: 33.275
- type: mrr_at_1
value: 27.683999999999997
- type: mrr_at_10
value: 36.575
- type: mrr_at_100
value: 37.492
- type: mrr_at_1000
value: 37.556
- type: mrr_at_3
value: 34.35
- type: mrr_at_5
value: 35.525
- type: ndcg_at_1
value: 27.683999999999997
- type: ndcg_at_10
value: 39.247
- type: ndcg_at_100
value: 44.424
- type: ndcg_at_1000
value: 46.478
- type: ndcg_at_3
value: 34.684
- type: ndcg_at_5
value: 36.886
- type: precision_at_1
value: 27.683999999999997
- type: precision_at_10
value: 5.989
- type: precision_at_100
value: 0.899
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 14.84
- type: precision_at_5
value: 10.215
- type: recall_at_1
value: 25.378
- type: recall_at_10
value: 52.195
- type: recall_at_100
value: 75.764
- type: recall_at_1000
value: 91.012
- type: recall_at_3
value: 39.885999999999996
- type: recall_at_5
value: 45.279
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 17.326
- type: map_at_10
value: 25.247000000000003
- type: map_at_100
value: 26.473000000000003
- type: map_at_1000
value: 26.579000000000004
- type: map_at_3
value: 22.466
- type: map_at_5
value: 24.113
- type: mrr_at_1
value: 21.393
- type: mrr_at_10
value: 30.187
- type: mrr_at_100
value: 31.089
- type: mrr_at_1000
value: 31.15
- type: mrr_at_3
value: 27.279999999999998
- type: mrr_at_5
value: 29.127
- type: ndcg_at_1
value: 21.393
- type: ndcg_at_10
value: 30.668
- type: ndcg_at_100
value: 36.543
- type: ndcg_at_1000
value: 39.181
- type: ndcg_at_3
value: 25.552000000000003
- type: ndcg_at_5
value: 28.176000000000002
- type: precision_at_1
value: 21.393
- type: precision_at_10
value: 5.784000000000001
- type: precision_at_100
value: 1.001
- type: precision_at_1000
value: 0.136
- type: precision_at_3
value: 12.231
- type: precision_at_5
value: 9.179
- type: recall_at_1
value: 17.326
- type: recall_at_10
value: 42.415000000000006
- type: recall_at_100
value: 68.605
- type: recall_at_1000
value: 87.694
- type: recall_at_3
value: 28.343
- type: recall_at_5
value: 35.086
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 29.069
- type: map_at_10
value: 40.027
- type: map_at_100
value: 41.308
- type: map_at_1000
value: 41.412
- type: map_at_3
value: 36.864000000000004
- type: map_at_5
value: 38.641999999999996
- type: mrr_at_1
value: 35.707
- type: mrr_at_10
value: 45.527
- type: mrr_at_100
value: 46.348
- type: mrr_at_1000
value: 46.392
- type: mrr_at_3
value: 43.086
- type: mrr_at_5
value: 44.645
- type: ndcg_at_1
value: 35.707
- type: ndcg_at_10
value: 46.117000000000004
- type: ndcg_at_100
value: 51.468
- type: ndcg_at_1000
value: 53.412000000000006
- type: ndcg_at_3
value: 41.224
- type: ndcg_at_5
value: 43.637
- type: precision_at_1
value: 35.707
- type: precision_at_10
value: 8.459999999999999
- type: precision_at_100
value: 1.2970000000000002
- type: precision_at_1000
value: 0.165
- type: precision_at_3
value: 19.731
- type: precision_at_5
value: 14.013
- type: recall_at_1
value: 29.069
- type: recall_at_10
value: 58.343999999999994
- type: recall_at_100
value: 81.296
- type: recall_at_1000
value: 93.974
- type: recall_at_3
value: 44.7
- type: recall_at_5
value: 50.88700000000001
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.905
- type: map_at_10
value: 33.983000000000004
- type: map_at_100
value: 35.372
- type: map_at_1000
value: 35.487
- type: map_at_3
value: 30.902
- type: map_at_5
value: 32.505
- type: mrr_at_1
value: 29.794999999999998
- type: mrr_at_10
value: 39.28
- type: mrr_at_100
value: 40.215
- type: mrr_at_1000
value: 40.276
- type: mrr_at_3
value: 36.701
- type: mrr_at_5
value: 38.105
- type: ndcg_at_1
value: 29.794999999999998
- type: ndcg_at_10
value: 40.041
- type: ndcg_at_100
value: 45.884
- type: ndcg_at_1000
value: 48.271
- type: ndcg_at_3
value: 34.931
- type: ndcg_at_5
value: 37.044
- type: precision_at_1
value: 29.794999999999998
- type: precision_at_10
value: 7.546
- type: precision_at_100
value: 1.216
- type: precision_at_1000
value: 0.158
- type: precision_at_3
value: 16.933
- type: precision_at_5
value: 12.1
- type: recall_at_1
value: 23.905
- type: recall_at_10
value: 52.945
- type: recall_at_100
value: 77.551
- type: recall_at_1000
value: 93.793
- type: recall_at_3
value: 38.364
- type: recall_at_5
value: 44.044
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.24441666666667
- type: map_at_10
value: 34.4595
- type: map_at_100
value: 35.699999999999996
- type: map_at_1000
value: 35.8155
- type: map_at_3
value: 31.608333333333338
- type: map_at_5
value: 33.189416666666666
- type: mrr_at_1
value: 29.825250000000004
- type: mrr_at_10
value: 38.60875
- type: mrr_at_100
value: 39.46575
- type: mrr_at_1000
value: 39.52458333333333
- type: mrr_at_3
value: 36.145166666666675
- type: mrr_at_5
value: 37.57625
- type: ndcg_at_1
value: 29.825250000000004
- type: ndcg_at_10
value: 39.88741666666667
- type: ndcg_at_100
value: 45.17966666666667
- type: ndcg_at_1000
value: 47.440583333333336
- type: ndcg_at_3
value: 35.04591666666666
- type: ndcg_at_5
value: 37.32025
- type: precision_at_1
value: 29.825250000000004
- type: precision_at_10
value: 7.07225
- type: precision_at_100
value: 1.1462499999999998
- type: precision_at_1000
value: 0.15325
- type: precision_at_3
value: 16.18375
- type: precision_at_5
value: 11.526833333333334
- type: recall_at_1
value: 25.24441666666667
- type: recall_at_10
value: 51.744916666666676
- type: recall_at_100
value: 75.04574999999998
- type: recall_at_1000
value: 90.65558333333334
- type: recall_at_3
value: 38.28349999999999
- type: recall_at_5
value: 44.16591666666667
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.237000000000002
- type: map_at_10
value: 30.667
- type: map_at_100
value: 31.592
- type: map_at_1000
value: 31.688
- type: map_at_3
value: 28.810999999999996
- type: map_at_5
value: 29.788999999999998
- type: mrr_at_1
value: 26.840000000000003
- type: mrr_at_10
value: 33.305
- type: mrr_at_100
value: 34.089000000000006
- type: mrr_at_1000
value: 34.159
- type: mrr_at_3
value: 31.518
- type: mrr_at_5
value: 32.469
- type: ndcg_at_1
value: 26.840000000000003
- type: ndcg_at_10
value: 34.541
- type: ndcg_at_100
value: 39.206
- type: ndcg_at_1000
value: 41.592
- type: ndcg_at_3
value: 31.005
- type: ndcg_at_5
value: 32.554
- type: precision_at_1
value: 26.840000000000003
- type: precision_at_10
value: 5.3069999999999995
- type: precision_at_100
value: 0.8340000000000001
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 13.292000000000002
- type: precision_at_5
value: 9.049
- type: recall_at_1
value: 24.237000000000002
- type: recall_at_10
value: 43.862
- type: recall_at_100
value: 65.352
- type: recall_at_1000
value: 82.704
- type: recall_at_3
value: 34.009
- type: recall_at_5
value: 37.878
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 16.482
- type: map_at_10
value: 23.249
- type: map_at_100
value: 24.388
- type: map_at_1000
value: 24.519
- type: map_at_3
value: 20.971
- type: map_at_5
value: 22.192
- type: mrr_at_1
value: 19.993
- type: mrr_at_10
value: 26.985
- type: mrr_at_100
value: 27.975
- type: mrr_at_1000
value: 28.052
- type: mrr_at_3
value: 24.954
- type: mrr_at_5
value: 26.070999999999998
- type: ndcg_at_1
value: 19.993
- type: ndcg_at_10
value: 27.656
- type: ndcg_at_100
value: 33.256
- type: ndcg_at_1000
value: 36.275
- type: ndcg_at_3
value: 23.644000000000002
- type: ndcg_at_5
value: 25.466
- type: precision_at_1
value: 19.993
- type: precision_at_10
value: 5.093
- type: precision_at_100
value: 0.932
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 11.149000000000001
- type: precision_at_5
value: 8.149000000000001
- type: recall_at_1
value: 16.482
- type: recall_at_10
value: 37.141999999999996
- type: recall_at_100
value: 62.696
- type: recall_at_1000
value: 84.333
- type: recall_at_3
value: 26.031
- type: recall_at_5
value: 30.660999999999998
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.887999999999998
- type: map_at_10
value: 34.101
- type: map_at_100
value: 35.27
- type: map_at_1000
value: 35.370000000000005
- type: map_at_3
value: 31.283
- type: map_at_5
value: 32.72
- type: mrr_at_1
value: 29.011
- type: mrr_at_10
value: 38.004
- type: mrr_at_100
value: 38.879000000000005
- type: mrr_at_1000
value: 38.938
- type: mrr_at_3
value: 35.571999999999996
- type: mrr_at_5
value: 36.789
- type: ndcg_at_1
value: 29.011
- type: ndcg_at_10
value: 39.586
- type: ndcg_at_100
value: 44.939
- type: ndcg_at_1000
value: 47.236
- type: ndcg_at_3
value: 34.4
- type: ndcg_at_5
value: 36.519
- type: precision_at_1
value: 29.011
- type: precision_at_10
value: 6.763
- type: precision_at_100
value: 1.059
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 15.609
- type: precision_at_5
value: 10.896
- type: recall_at_1
value: 24.887999999999998
- type: recall_at_10
value: 52.42
- type: recall_at_100
value: 75.803
- type: recall_at_1000
value: 91.725
- type: recall_at_3
value: 38.080999999999996
- type: recall_at_5
value: 43.47
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.953
- type: map_at_10
value: 32.649
- type: map_at_100
value: 34.181
- type: map_at_1000
value: 34.398
- type: map_at_3
value: 29.567
- type: map_at_5
value: 31.263
- type: mrr_at_1
value: 29.051
- type: mrr_at_10
value: 37.419999999999995
- type: mrr_at_100
value: 38.396
- type: mrr_at_1000
value: 38.458
- type: mrr_at_3
value: 34.782999999999994
- type: mrr_at_5
value: 36.254999999999995
- type: ndcg_at_1
value: 29.051
- type: ndcg_at_10
value: 38.595
- type: ndcg_at_100
value: 44.6
- type: ndcg_at_1000
value: 47.158
- type: ndcg_at_3
value: 33.56
- type: ndcg_at_5
value: 35.870000000000005
- type: precision_at_1
value: 29.051
- type: precision_at_10
value: 7.53
- type: precision_at_100
value: 1.538
- type: precision_at_1000
value: 0.24
- type: precision_at_3
value: 15.744
- type: precision_at_5
value: 11.542
- type: recall_at_1
value: 23.953
- type: recall_at_10
value: 50.08200000000001
- type: recall_at_100
value: 77.364
- type: recall_at_1000
value: 93.57799999999999
- type: recall_at_3
value: 35.432
- type: recall_at_5
value: 41.875
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 17.72
- type: map_at_10
value: 25.724000000000004
- type: map_at_100
value: 26.846999999999998
- type: map_at_1000
value: 26.964
- type: map_at_3
value: 22.909
- type: map_at_5
value: 24.596999999999998
- type: mrr_at_1
value: 18.854000000000003
- type: mrr_at_10
value: 27.182000000000002
- type: mrr_at_100
value: 28.182000000000002
- type: mrr_at_1000
value: 28.274
- type: mrr_at_3
value: 24.276
- type: mrr_at_5
value: 26.115
- type: ndcg_at_1
value: 18.854000000000003
- type: ndcg_at_10
value: 30.470000000000002
- type: ndcg_at_100
value: 35.854
- type: ndcg_at_1000
value: 38.701
- type: ndcg_at_3
value: 24.924
- type: ndcg_at_5
value: 27.895999999999997
- type: precision_at_1
value: 18.854000000000003
- type: precision_at_10
value: 5.009
- type: precision_at_100
value: 0.835
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 10.721
- type: precision_at_5
value: 8.133
- type: recall_at_1
value: 17.72
- type: recall_at_10
value: 43.617
- type: recall_at_100
value: 67.941
- type: recall_at_1000
value: 88.979
- type: recall_at_3
value: 29.044999999999998
- type: recall_at_5
value: 36.044
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 13.427
- type: map_at_10
value: 22.935
- type: map_at_100
value: 24.808
- type: map_at_1000
value: 24.994
- type: map_at_3
value: 19.533
- type: map_at_5
value: 21.261
- type: mrr_at_1
value: 30.945
- type: mrr_at_10
value: 43.242000000000004
- type: mrr_at_100
value: 44.013999999999996
- type: mrr_at_1000
value: 44.048
- type: mrr_at_3
value: 40.109
- type: mrr_at_5
value: 42.059999999999995
- type: ndcg_at_1
value: 30.945
- type: ndcg_at_10
value: 31.828
- type: ndcg_at_100
value: 38.801
- type: ndcg_at_1000
value: 42.126999999999995
- type: ndcg_at_3
value: 26.922
- type: ndcg_at_5
value: 28.483999999999998
- type: precision_at_1
value: 30.945
- type: precision_at_10
value: 9.844
- type: precision_at_100
value: 1.7309999999999999
- type: precision_at_1000
value: 0.23500000000000001
- type: precision_at_3
value: 20.477999999999998
- type: precision_at_5
value: 15.27
- type: recall_at_1
value: 13.427
- type: recall_at_10
value: 37.141000000000005
- type: recall_at_100
value: 61.007
- type: recall_at_1000
value: 79.742
- type: recall_at_3
value: 24.431
- type: recall_at_5
value: 29.725
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.122
- type: map_at_10
value: 18.799
- type: map_at_100
value: 25.724999999999998
- type: map_at_1000
value: 27.205000000000002
- type: map_at_3
value: 14.194999999999999
- type: map_at_5
value: 16.225
- type: mrr_at_1
value: 68.0
- type: mrr_at_10
value: 76.035
- type: mrr_at_100
value: 76.292
- type: mrr_at_1000
value: 76.297
- type: mrr_at_3
value: 74.458
- type: mrr_at_5
value: 75.558
- type: ndcg_at_1
value: 56.00000000000001
- type: ndcg_at_10
value: 39.761
- type: ndcg_at_100
value: 43.736999999999995
- type: ndcg_at_1000
value: 51.146
- type: ndcg_at_3
value: 45.921
- type: ndcg_at_5
value: 42.756
- type: precision_at_1
value: 68.0
- type: precision_at_10
value: 30.275000000000002
- type: precision_at_100
value: 9.343
- type: precision_at_1000
value: 1.8270000000000002
- type: precision_at_3
value: 49.167
- type: precision_at_5
value: 40.699999999999996
- type: recall_at_1
value: 9.122
- type: recall_at_10
value: 23.669999999999998
- type: recall_at_100
value: 48.719
- type: recall_at_1000
value: 72.033
- type: recall_at_3
value: 15.498999999999999
- type: recall_at_5
value: 18.657
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 55.885000000000005
- type: f1
value: 50.70726446938571
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 75.709
- type: map_at_10
value: 83.345
- type: map_at_100
value: 83.557
- type: map_at_1000
value: 83.572
- type: map_at_3
value: 82.425
- type: map_at_5
value: 83.013
- type: mrr_at_1
value: 81.593
- type: mrr_at_10
value: 88.331
- type: mrr_at_100
value: 88.408
- type: mrr_at_1000
value: 88.41
- type: mrr_at_3
value: 87.714
- type: mrr_at_5
value: 88.122
- type: ndcg_at_1
value: 81.593
- type: ndcg_at_10
value: 86.925
- type: ndcg_at_100
value: 87.67
- type: ndcg_at_1000
value: 87.924
- type: ndcg_at_3
value: 85.5
- type: ndcg_at_5
value: 86.283
- type: precision_at_1
value: 81.593
- type: precision_at_10
value: 10.264
- type: precision_at_100
value: 1.084
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 32.388
- type: precision_at_5
value: 19.991
- type: recall_at_1
value: 75.709
- type: recall_at_10
value: 93.107
- type: recall_at_100
value: 96.024
- type: recall_at_1000
value: 97.603
- type: recall_at_3
value: 89.08500000000001
- type: recall_at_5
value: 91.15299999999999
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.121
- type: map_at_10
value: 31.78
- type: map_at_100
value: 33.497
- type: map_at_1000
value: 33.696
- type: map_at_3
value: 27.893
- type: map_at_5
value: 30.087000000000003
- type: mrr_at_1
value: 38.272
- type: mrr_at_10
value: 47.176
- type: mrr_at_100
value: 48.002
- type: mrr_at_1000
value: 48.044
- type: mrr_at_3
value: 45.086999999999996
- type: mrr_at_5
value: 46.337
- type: ndcg_at_1
value: 38.272
- type: ndcg_at_10
value: 39.145
- type: ndcg_at_100
value: 45.696999999999996
- type: ndcg_at_1000
value: 49.0
- type: ndcg_at_3
value: 36.148
- type: ndcg_at_5
value: 37.023
- type: precision_at_1
value: 38.272
- type: precision_at_10
value: 11.065
- type: precision_at_100
value: 1.7840000000000003
- type: precision_at_1000
value: 0.23600000000000002
- type: precision_at_3
value: 24.587999999999997
- type: precision_at_5
value: 18.056
- type: recall_at_1
value: 19.121
- type: recall_at_10
value: 44.857
- type: recall_at_100
value: 69.774
- type: recall_at_1000
value: 89.645
- type: recall_at_3
value: 32.588
- type: recall_at_5
value: 37.939
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 36.428
- type: map_at_10
value: 56.891999999999996
- type: map_at_100
value: 57.82899999999999
- type: map_at_1000
value: 57.896
- type: map_at_3
value: 53.762
- type: map_at_5
value: 55.718
- type: mrr_at_1
value: 72.856
- type: mrr_at_10
value: 79.245
- type: mrr_at_100
value: 79.515
- type: mrr_at_1000
value: 79.525
- type: mrr_at_3
value: 78.143
- type: mrr_at_5
value: 78.822
- type: ndcg_at_1
value: 72.856
- type: ndcg_at_10
value: 65.204
- type: ndcg_at_100
value: 68.552
- type: ndcg_at_1000
value: 69.902
- type: ndcg_at_3
value: 60.632
- type: ndcg_at_5
value: 63.161
- type: precision_at_1
value: 72.856
- type: precision_at_10
value: 13.65
- type: precision_at_100
value: 1.6260000000000001
- type: precision_at_1000
value: 0.181
- type: precision_at_3
value: 38.753
- type: precision_at_5
value: 25.251
- type: recall_at_1
value: 36.428
- type: recall_at_10
value: 68.25099999999999
- type: recall_at_100
value: 81.317
- type: recall_at_1000
value: 90.27
- type: recall_at_3
value: 58.13
- type: recall_at_5
value: 63.126000000000005
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 89.4868
- type: ap
value: 84.88319192880247
- type: f1
value: 89.46144458052846
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 21.282999999999998
- type: map_at_10
value: 33.045
- type: map_at_100
value: 34.238
- type: map_at_1000
value: 34.29
- type: map_at_3
value: 29.305999999999997
- type: map_at_5
value: 31.391000000000002
- type: mrr_at_1
value: 21.92
- type: mrr_at_10
value: 33.649
- type: mrr_at_100
value: 34.791
- type: mrr_at_1000
value: 34.837
- type: mrr_at_3
value: 30.0
- type: mrr_at_5
value: 32.039
- type: ndcg_at_1
value: 21.92
- type: ndcg_at_10
value: 39.729
- type: ndcg_at_100
value: 45.484
- type: ndcg_at_1000
value: 46.817
- type: ndcg_at_3
value: 32.084
- type: ndcg_at_5
value: 35.789
- type: precision_at_1
value: 21.92
- type: precision_at_10
value: 6.297
- type: precision_at_100
value: 0.918
- type: precision_at_1000
value: 0.10300000000000001
- type: precision_at_3
value: 13.639000000000001
- type: precision_at_5
value: 10.054
- type: recall_at_1
value: 21.282999999999998
- type: recall_at_10
value: 60.343999999999994
- type: recall_at_100
value: 86.981
- type: recall_at_1000
value: 97.205
- type: recall_at_3
value: 39.452999999999996
- type: recall_at_5
value: 48.333
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 95.47879616963064
- type: f1
value: 95.21800589958251
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 79.09256725946192
- type: f1
value: 60.554043889452515
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 75.53463349024882
- type: f1
value: 73.14418495756476
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 79.22663080026899
- type: f1
value: 79.331456217501
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 34.50316010430136
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 32.15612040042282
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 32.36227552557184
- type: mrr
value: 33.57901344209811
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.6610000000000005
- type: map_at_10
value: 12.992
- type: map_at_100
value: 16.756999999999998
- type: map_at_1000
value: 18.25
- type: map_at_3
value: 9.471
- type: map_at_5
value: 11.116
- type: mrr_at_1
value: 43.653
- type: mrr_at_10
value: 53.388999999999996
- type: mrr_at_100
value: 53.982
- type: mrr_at_1000
value: 54.033
- type: mrr_at_3
value: 51.858000000000004
- type: mrr_at_5
value: 53.019000000000005
- type: ndcg_at_1
value: 41.641
- type: ndcg_at_10
value: 34.691
- type: ndcg_at_100
value: 32.305
- type: ndcg_at_1000
value: 41.132999999999996
- type: ndcg_at_3
value: 40.614
- type: ndcg_at_5
value: 38.456
- type: precision_at_1
value: 43.344
- type: precision_at_10
value: 25.881999999999998
- type: precision_at_100
value: 8.483
- type: precision_at_1000
value: 2.131
- type: precision_at_3
value: 38.803
- type: precision_at_5
value: 33.87
- type: recall_at_1
value: 5.6610000000000005
- type: recall_at_10
value: 16.826
- type: recall_at_100
value: 32.939
- type: recall_at_1000
value: 65.161
- type: recall_at_3
value: 10.756
- type: recall_at_5
value: 13.331000000000001
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.692
- type: map_at_10
value: 41.065000000000005
- type: map_at_100
value: 42.235
- type: map_at_1000
value: 42.27
- type: map_at_3
value: 36.635
- type: map_at_5
value: 39.219
- type: mrr_at_1
value: 30.214000000000002
- type: mrr_at_10
value: 43.443
- type: mrr_at_100
value: 44.326
- type: mrr_at_1000
value: 44.352000000000004
- type: mrr_at_3
value: 39.623999999999995
- type: mrr_at_5
value: 41.898
- type: ndcg_at_1
value: 30.214000000000002
- type: ndcg_at_10
value: 48.692
- type: ndcg_at_100
value: 53.671
- type: ndcg_at_1000
value: 54.522000000000006
- type: ndcg_at_3
value: 40.245
- type: ndcg_at_5
value: 44.580999999999996
- type: precision_at_1
value: 30.214000000000002
- type: precision_at_10
value: 8.3
- type: precision_at_100
value: 1.1079999999999999
- type: precision_at_1000
value: 0.11900000000000001
- type: precision_at_3
value: 18.521
- type: precision_at_5
value: 13.627
- type: recall_at_1
value: 26.692
- type: recall_at_10
value: 69.699
- type: recall_at_100
value: 91.425
- type: recall_at_1000
value: 97.78099999999999
- type: recall_at_3
value: 47.711
- type: recall_at_5
value: 57.643
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.962
- type: map_at_10
value: 84.772
- type: map_at_100
value: 85.402
- type: map_at_1000
value: 85.418
- type: map_at_3
value: 81.89
- type: map_at_5
value: 83.685
- type: mrr_at_1
value: 81.67
- type: mrr_at_10
value: 87.681
- type: mrr_at_100
value: 87.792
- type: mrr_at_1000
value: 87.79299999999999
- type: mrr_at_3
value: 86.803
- type: mrr_at_5
value: 87.392
- type: ndcg_at_1
value: 81.69
- type: ndcg_at_10
value: 88.429
- type: ndcg_at_100
value: 89.66
- type: ndcg_at_1000
value: 89.762
- type: ndcg_at_3
value: 85.75
- type: ndcg_at_5
value: 87.20700000000001
- type: precision_at_1
value: 81.69
- type: precision_at_10
value: 13.395000000000001
- type: precision_at_100
value: 1.528
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.507000000000005
- type: precision_at_5
value: 24.614
- type: recall_at_1
value: 70.962
- type: recall_at_10
value: 95.339
- type: recall_at_100
value: 99.543
- type: recall_at_1000
value: 99.984
- type: recall_at_3
value: 87.54899999999999
- type: recall_at_5
value: 91.726
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 55.506631779239555
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 60.63731341848479
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.852
- type: map_at_10
value: 13.175
- type: map_at_100
value: 15.623999999999999
- type: map_at_1000
value: 16.002
- type: map_at_3
value: 9.103
- type: map_at_5
value: 11.068999999999999
- type: mrr_at_1
value: 23.9
- type: mrr_at_10
value: 35.847
- type: mrr_at_100
value: 36.968
- type: mrr_at_1000
value: 37.018
- type: mrr_at_3
value: 32.300000000000004
- type: mrr_at_5
value: 34.14
- type: ndcg_at_1
value: 23.9
- type: ndcg_at_10
value: 21.889
- type: ndcg_at_100
value: 30.903000000000002
- type: ndcg_at_1000
value: 36.992000000000004
- type: ndcg_at_3
value: 20.274
- type: ndcg_at_5
value: 17.773
- type: precision_at_1
value: 23.9
- type: precision_at_10
value: 11.61
- type: precision_at_100
value: 2.4539999999999997
- type: precision_at_1000
value: 0.391
- type: precision_at_3
value: 19.133
- type: precision_at_5
value: 15.740000000000002
- type: recall_at_1
value: 4.852
- type: recall_at_10
value: 23.507
- type: recall_at_100
value: 49.775000000000006
- type: recall_at_1000
value: 79.308
- type: recall_at_3
value: 11.637
- type: recall_at_5
value: 15.947
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 86.03345827446948
- type: cos_sim_spearman
value: 80.53174518259549
- type: euclidean_pearson
value: 83.44538971660883
- type: euclidean_spearman
value: 80.57344324098692
- type: manhattan_pearson
value: 83.36528808195459
- type: manhattan_spearman
value: 80.48931287157902
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 85.21363088257881
- type: cos_sim_spearman
value: 75.56589127055523
- type: euclidean_pearson
value: 82.32868324521908
- type: euclidean_spearman
value: 75.31928550664554
- type: manhattan_pearson
value: 82.31332875713211
- type: manhattan_spearman
value: 75.35376322099196
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 85.09085593258487
- type: cos_sim_spearman
value: 86.26355088415221
- type: euclidean_pearson
value: 85.49646115361156
- type: euclidean_spearman
value: 86.20652472228703
- type: manhattan_pearson
value: 85.44084081123815
- type: manhattan_spearman
value: 86.1162623448951
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 84.68250248349368
- type: cos_sim_spearman
value: 82.29883673695083
- type: euclidean_pearson
value: 84.17633035446019
- type: euclidean_spearman
value: 82.19990511264791
- type: manhattan_pearson
value: 84.17408410692279
- type: manhattan_spearman
value: 82.249873895981
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 87.31878760045024
- type: cos_sim_spearman
value: 88.7364409031183
- type: euclidean_pearson
value: 88.230537618603
- type: euclidean_spearman
value: 88.76484309646318
- type: manhattan_pearson
value: 88.17689071136469
- type: manhattan_spearman
value: 88.72809249037928
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 83.41078559110638
- type: cos_sim_spearman
value: 85.27439135411049
- type: euclidean_pearson
value: 84.5333571592088
- type: euclidean_spearman
value: 85.25645460575957
- type: manhattan_pearson
value: 84.38428921610226
- type: manhattan_spearman
value: 85.07796040798796
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 88.82374132382576
- type: cos_sim_spearman
value: 89.02101343562433
- type: euclidean_pearson
value: 89.50729765458932
- type: euclidean_spearman
value: 89.04184772869253
- type: manhattan_pearson
value: 89.51737904059856
- type: manhattan_spearman
value: 89.12925950440676
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 67.56051823873482
- type: cos_sim_spearman
value: 68.50988748185463
- type: euclidean_pearson
value: 69.16524346147456
- type: euclidean_spearman
value: 68.61859952449579
- type: manhattan_pearson
value: 69.10618915706995
- type: manhattan_spearman
value: 68.36401769459522
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 85.4159693872625
- type: cos_sim_spearman
value: 87.07819121764247
- type: euclidean_pearson
value: 87.03013260863153
- type: euclidean_spearman
value: 87.06547293631309
- type: manhattan_pearson
value: 86.8129744446062
- type: manhattan_spearman
value: 86.88494096335627
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 86.47758088996575
- type: mrr
value: 96.17891458577733
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 57.538999999999994
- type: map_at_10
value: 66.562
- type: map_at_100
value: 67.254
- type: map_at_1000
value: 67.284
- type: map_at_3
value: 63.722
- type: map_at_5
value: 65.422
- type: mrr_at_1
value: 60.0
- type: mrr_at_10
value: 67.354
- type: mrr_at_100
value: 67.908
- type: mrr_at_1000
value: 67.93299999999999
- type: mrr_at_3
value: 65.056
- type: mrr_at_5
value: 66.43900000000001
- type: ndcg_at_1
value: 60.0
- type: ndcg_at_10
value: 70.858
- type: ndcg_at_100
value: 73.67099999999999
- type: ndcg_at_1000
value: 74.26700000000001
- type: ndcg_at_3
value: 65.911
- type: ndcg_at_5
value: 68.42200000000001
- type: precision_at_1
value: 60.0
- type: precision_at_10
value: 9.4
- type: precision_at_100
value: 1.083
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 25.444
- type: precision_at_5
value: 17.0
- type: recall_at_1
value: 57.538999999999994
- type: recall_at_10
value: 83.233
- type: recall_at_100
value: 95.667
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 69.883
- type: recall_at_5
value: 76.19399999999999
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.82574257425742
- type: cos_sim_ap
value: 95.78722833053911
- type: cos_sim_f1
value: 90.94650205761316
- type: cos_sim_precision
value: 93.64406779661016
- type: cos_sim_recall
value: 88.4
- type: dot_accuracy
value: 99.83366336633664
- type: dot_ap
value: 95.89733601612964
- type: dot_f1
value: 91.41981613891727
- type: dot_precision
value: 93.42379958246346
- type: dot_recall
value: 89.5
- type: euclidean_accuracy
value: 99.82574257425742
- type: euclidean_ap
value: 95.75227035138846
- type: euclidean_f1
value: 90.96509240246407
- type: euclidean_precision
value: 93.45991561181435
- type: euclidean_recall
value: 88.6
- type: manhattan_accuracy
value: 99.82574257425742
- type: manhattan_ap
value: 95.76278266220176
- type: manhattan_f1
value: 91.08409321175279
- type: manhattan_precision
value: 92.29979466119097
- type: manhattan_recall
value: 89.9
- type: max_accuracy
value: 99.83366336633664
- type: max_ap
value: 95.89733601612964
- type: max_f1
value: 91.41981613891727
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 61.905425988638605
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 36.159589881679736
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 53.0605499476397
- type: mrr
value: 53.91594516594517
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.202718009067
- type: cos_sim_spearman
value: 31.136199912366987
- type: dot_pearson
value: 30.66329011927951
- type: dot_spearman
value: 30.107664909625107
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.209
- type: map_at_10
value: 1.712
- type: map_at_100
value: 9.464
- type: map_at_1000
value: 23.437
- type: map_at_3
value: 0.609
- type: map_at_5
value: 0.9440000000000001
- type: mrr_at_1
value: 78.0
- type: mrr_at_10
value: 86.833
- type: mrr_at_100
value: 86.833
- type: mrr_at_1000
value: 86.833
- type: mrr_at_3
value: 85.333
- type: mrr_at_5
value: 86.833
- type: ndcg_at_1
value: 74.0
- type: ndcg_at_10
value: 69.14
- type: ndcg_at_100
value: 53.047999999999995
- type: ndcg_at_1000
value: 48.577
- type: ndcg_at_3
value: 75.592
- type: ndcg_at_5
value: 72.509
- type: precision_at_1
value: 78.0
- type: precision_at_10
value: 73.0
- type: precision_at_100
value: 54.44
- type: precision_at_1000
value: 21.326
- type: precision_at_3
value: 80.667
- type: precision_at_5
value: 77.2
- type: recall_at_1
value: 0.209
- type: recall_at_10
value: 1.932
- type: recall_at_100
value: 13.211999999999998
- type: recall_at_1000
value: 45.774
- type: recall_at_3
value: 0.644
- type: recall_at_5
value: 1.0290000000000001
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.609
- type: map_at_10
value: 8.334999999999999
- type: map_at_100
value: 14.604000000000001
- type: map_at_1000
value: 16.177
- type: map_at_3
value: 4.87
- type: map_at_5
value: 6.3149999999999995
- type: mrr_at_1
value: 32.653
- type: mrr_at_10
value: 45.047
- type: mrr_at_100
value: 45.808
- type: mrr_at_1000
value: 45.808
- type: mrr_at_3
value: 41.497
- type: mrr_at_5
value: 43.231
- type: ndcg_at_1
value: 30.612000000000002
- type: ndcg_at_10
value: 21.193
- type: ndcg_at_100
value: 34.97
- type: ndcg_at_1000
value: 46.69
- type: ndcg_at_3
value: 24.823
- type: ndcg_at_5
value: 22.872999999999998
- type: precision_at_1
value: 32.653
- type: precision_at_10
value: 17.959
- type: precision_at_100
value: 7.4079999999999995
- type: precision_at_1000
value: 1.537
- type: precision_at_3
value: 25.85
- type: precision_at_5
value: 22.448999999999998
- type: recall_at_1
value: 2.609
- type: recall_at_10
value: 13.63
- type: recall_at_100
value: 47.014
- type: recall_at_1000
value: 83.176
- type: recall_at_3
value: 5.925
- type: recall_at_5
value: 8.574
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 72.80239999999999
- type: ap
value: 15.497911013214791
- type: f1
value: 56.258411577947285
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 61.00452744765139
- type: f1
value: 61.42228624410908
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 50.00516915962345
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 85.62317458425225
- type: cos_sim_ap
value: 72.95115658063823
- type: cos_sim_f1
value: 66.78976523344764
- type: cos_sim_precision
value: 66.77215189873418
- type: cos_sim_recall
value: 66.80738786279683
- type: dot_accuracy
value: 85.62317458425225
- type: dot_ap
value: 73.10385271517778
- type: dot_f1
value: 66.94853829427399
- type: dot_precision
value: 61.74242424242424
- type: dot_recall
value: 73.11345646437995
- type: euclidean_accuracy
value: 85.65893783155511
- type: euclidean_ap
value: 72.87428208473992
- type: euclidean_f1
value: 66.70919994896005
- type: euclidean_precision
value: 64.5910551025451
- type: euclidean_recall
value: 68.97097625329816
- type: manhattan_accuracy
value: 85.59933241938367
- type: manhattan_ap
value: 72.67282695064966
- type: manhattan_f1
value: 66.67537215983286
- type: manhattan_precision
value: 66.00310237849017
- type: manhattan_recall
value: 67.36147757255937
- type: max_accuracy
value: 85.65893783155511
- type: max_ap
value: 73.10385271517778
- type: max_f1
value: 66.94853829427399
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.69096130709822
- type: cos_sim_ap
value: 85.30326978668063
- type: cos_sim_f1
value: 77.747088683189
- type: cos_sim_precision
value: 75.4491451753115
- type: cos_sim_recall
value: 80.189405605174
- type: dot_accuracy
value: 88.43870066363954
- type: dot_ap
value: 84.62999949222983
- type: dot_f1
value: 77.3074661963551
- type: dot_precision
value: 73.93871239808828
- type: dot_recall
value: 80.99784416384355
- type: euclidean_accuracy
value: 88.70066363953894
- type: euclidean_ap
value: 85.34184508966621
- type: euclidean_f1
value: 77.76871756856931
- type: euclidean_precision
value: 74.97855917667239
- type: euclidean_recall
value: 80.77456113335386
- type: manhattan_accuracy
value: 88.68319944114566
- type: manhattan_ap
value: 85.3026464242333
- type: manhattan_f1
value: 77.66561049296294
- type: manhattan_precision
value: 74.4665818849795
- type: manhattan_recall
value: 81.15183246073299
- type: max_accuracy
value: 88.70066363953894
- type: max_ap
value: 85.34184508966621
- type: max_f1
value: 77.76871756856931
---
<h1 align="center">GIST small Embedding v0</h1>
*GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embedding Fine-tuning*
The model is fine-tuned on top of the [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) using the [MEDI dataset](https://github.com/xlang-ai/instructor-embedding.git) augmented with mined triplets from the [MTEB Classification](https://huggingface.co/mteb) training dataset (excluding data from the Amazon Polarity Classification task).
The model does not require any instruction for generating embeddings. This means that queries for retrieval tasks can be directly encoded without crafting instructions.
Technical paper: [GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embedding Fine-tuning](https://arxiv.org/abs/2402.16829)
# Data
The dataset used is a compilation of the MEDI and MTEB Classification training datasets. Third-party datasets may be subject to additional terms and conditions under their associated licenses. A HuggingFace Dataset version of the compiled dataset, along with the specific revision used to train the model, is available:
- Dataset: [avsolatorio/medi-data-mteb_avs_triplets](https://huggingface.co/datasets/avsolatorio/medi-data-mteb_avs_triplets)
- Revision: 238a0499b6e6b690cc64ea56fde8461daa8341bb
The dataset contains a `task_type` key, which can be used to select only the mteb classification tasks (prefixed with `mteb_`).
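For example, the MTEB-only subset can be selected with the `datasets` library; a minimal sketch (the exact column layout beyond `task_type` is an assumption):

```python
from datasets import load_dataset

# Load the exact revision used to train the model.
data = load_dataset(
    "avsolatorio/medi-data-mteb_avs_triplets",
    revision="238a0499b6e6b690cc64ea56fde8461daa8341bb",
    split="train",
)

# Keep only the MTEB classification triplets (task_type prefixed with "mteb_").
mteb_triplets = data.filter(lambda row: row["task_type"].startswith("mteb_"))
print(len(mteb_triplets))
```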
The **MEDI Dataset** is published in the following paper: [One Embedder, Any Task: Instruction-Finetuned Text Embeddings](https://arxiv.org/abs/2212.09741).
The MTEB Benchmark results of the GIST embedding model, compared with the base model, suggest that the fine-tuning dataset has perturbed the model considerably, resulting in significant improvements on certain tasks while degrading performance on others.
The retrieval performance for the TRECCOVID task is of note. The fine-tuning dataset does not contain significant knowledge about COVID-19, which could have caused the observed performance degradation. We found some evidence, detailed in the paper, that thematic coverage of the fine-tuning data can affect downstream performance.
# Usage
The model can be easily loaded using the Sentence Transformers library.
```python
import torch.nn.functional as F
from sentence_transformers import SentenceTransformer
revision = None # Replace with the specific revision to ensure reproducibility if the model is updated.
model = SentenceTransformer("avsolatorio/GIST-small-Embedding-v0", revision=revision)
texts = [
"Illustration of the REaLTabFormer model. The left block shows the non-relational tabular data model using GPT-2 with a causal LM head. In contrast, the right block shows how a relational dataset's child table is modeled using a sequence-to-sequence (Seq2Seq) model. The Seq2Seq model uses the observations in the parent table to condition the generation of the observations in the child table. The trained GPT-2 model on the parent table, with weights frozen, is also used as the encoder in the Seq2Seq model.",
"Predicting human mobility holds significant practical value, with applications ranging from enhancing disaster risk planning to simulating epidemic spread. In this paper, we present the GeoFormer, a decoder-only transformer model adapted from the GPT architecture to forecast human mobility.",
"As the economies of Southeast Asia continue adopting digital technologies, policy makers increasingly ask how to prepare the workforce for emerging labor demands. However, little is known about the skills that workers need to adapt to these changes"
]
# Compute embeddings
embeddings = model.encode(texts, convert_to_tensor=True)
# Compute cosine-similarity for each pair of sentences
scores = F.cosine_similarity(embeddings.unsqueeze(1), embeddings.unsqueeze(0), dim=-1)
print(scores.cpu().numpy())
```
# Training Parameters
Below are the training parameters used to fine-tune the model:
```
Epochs = 40
Warmup ratio = 0.1
Learning rate = 5e-6
Batch size = 16
Checkpoint step = 102000
Contrastive loss temperature = 0.01
```
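For context, the temperature above plugs into an in-batch contrastive objective; a minimal InfoNCE-style sketch (the actual GISTEmbed loss uses guided negative selection as described in the paper, so treat this as illustrative only):

```python
import torch
import torch.nn.functional as F

def info_nce_loss(query_emb, pos_emb, temperature=0.01):
    # Cosine-similarity logits between all queries and all positives in the batch.
    q = F.normalize(query_emb, dim=-1)
    p = F.normalize(pos_emb, dim=-1)
    logits = q @ p.T / temperature
    # Each query's true positive sits on the diagonal;
    # the other rows in the batch act as in-batch negatives.
    labels = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, labels)
```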
# Evaluation
The model was evaluated using the [MTEB Evaluation](https://huggingface.co/mteb) suite.
# Citation
Please cite our work if you use GISTEmbed or the datasets we published in your projects or research. 🤗
```
@article{solatorio2024gistembed,
title={GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embedding Fine-tuning},
author={Aivin V. Solatorio},
journal={arXiv preprint arXiv:2402.16829},
year={2024},
url={https://arxiv.org/abs/2402.16829},
eprint={2402.16829},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
# Acknowledgements
This work is supported by the "KCP IV - Exploring Data Use in the Development Economics Literature using Large Language Models (AI and LLMs)" project funded by the [Knowledge for Change Program (KCP)](https://www.worldbank.org/en/programs/knowledge-for-change) of the World Bank - RA-P503405-RESE-TF0C3444.
The findings, interpretations, and conclusions expressed in this material are entirely those of the authors. They do not necessarily represent the views of the International Bank for Reconstruction and Development/World Bank and its affiliated organizations, or those of the Executive Directors of the World Bank or the governments they represent. |
Darkhn/test-EXL2-3.0bpw-H6 | Darkhn | 2025-05-27T19:15:34Z | 0 | 0 | exllamav2 | [
"exllamav2",
"quantized",
"license:mit",
"region:us"
]
| null | 2025-05-27T19:14:55Z | ---
library_name: exllamav2
license: mit
tags:
- exllamav2
- quantized
---
# test-EXL2-3.0bpw-H6
EXL2 quantized model of `/mnt/test/output/merged_passthrough_20250527_185209_194400` (the original base model).
## Quantization Details
- **Bits per weight (bpw):** 3.0
- **Head Bits:** 6
- **Calibration Source:** Measurement derived from model weights (no explicit dataset calibration or provided measurement for this specific quantization pass).
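Models in this format load with the exllamav2 Python API; a minimal inference sketch (the local path, prompt, and generation length below are illustrative assumptions):

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

model_dir = "./test-EXL2-3.0bpw-H6"  # local download of this repo
config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split layers across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Hello, my name is", max_new_tokens=32))
```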
Quantized using the [exllamav2 library](https://github.com/turboderp/exllamav2). |
bigband/FatherlyAthena | bigband | 2025-05-27T19:12:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifröst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T19:02:00Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built on the gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(messages, max_new_tokens=200)
print(output[0]["generated_text"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |
CCTV-Wiring-Cikgu-hd/Bocor.Video.CCTV.wiring.cikgu.video.nur.fadhilah.binti.zainal.guru.part.2.video | CCTV-Wiring-Cikgu-hd | 2025-05-27T19:10:21Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-27T19:07:43Z | [🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/?V=CCTV-Wiring-Cikgu)
[🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/?V=CCTV-Wiring-Cikgu)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=CCTV-Wiring-Cikgu) |
Kikinoking/MNLP_M2_quantized_model | Kikinoking | 2025-05-27T19:09:04Z | 16 | 0 | null | [
"pytorch",
"safetensors",
"qwen3",
"causal-lm",
"qwen",
"fine-tuned",
"quantized",
"mnlp",
"8-bit",
"compressed-tensors",
"region:us"
]
| null | 2025-05-24T21:37:20Z | ---
tags:
- causal-lm
- qwen
- fine-tuned
- quantized
- mnlp
---
# Qwen3-0.6B Full-Precision + W8A8 Quantized MCQA Model
**Repository:** [Kikinoking/MNLP_M2_quantized_model](https://huggingface.co/Kikinoking/MNLP_M2_quantized_model)
This is a fine-tuned Qwen-3-0.6B causal-LM, trained on a concatenation of multiple MCQA datasets and then quantized to 8-bit weights and activations using the compressed-tensors format. It is designed for multiple-choice QA tasks, evaluated with the LightEval EPFL MNLP suite.
---
## Model Details
- **Base architecture:** Qwen-3 (0.6B parameters)
- **Pretrained checkpoint:** `Qwen/Qwen3-0.6B-Base`
- **Fine-tuning data sources:**
- ScienceQA
- QASC
- OpenBookQA
- MathQA
- CommonsenseQA
- MCQA prompts generated via ChatGPT (labeled `M1_chatgpt`)
- **Dataset split:** 95% train / 5% validation
- **Tokenization:**
- `AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B-Base")`
- Left padding, EOS token as pad_token
- Sequence length capped at 2048 tokens
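A minimal sketch of that tokenizer setup (the sample input is illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B-Base")
tokenizer.padding_side = "left"            # left padding
tokenizer.pad_token = tokenizer.eos_token  # EOS token used as pad token

batch = tokenizer(
    ["What is the capital of France?"],
    padding=True,
    truncation=True,
    max_length=2048,  # sequence length cap
    return_tensors="pt",
)
```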
---
## Quantization
- **Method:** `compressed-tensors` / `naive-quantized`
- **Precision:** 8-bit weights + 8-bit activations
- **Layers kept in FP32:** Language modeling head
- **Checkpoint:** Compatible with CPU and GPU inference
---
## Evaluation
Tested using LightEval EPFL MNLP on the MCQA task:
```bash
lighteval accelerate --eval-mode lighteval --save-details --override-batch-size 8 --custom-tasks community_tasks/mnlp_mcqa_evals.py --output-dir out/lighteval_quant model_configs/quantized_model.yaml "community|mnlp_mcqa_evals|0|0"
```

**Results:**

- Accuracy: 0.30 ± 0.15
- Normalized Accuracy: 0.30 ± 0.15

---

## How to Use

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained(
    "Kikinoking/MNLP_M2_quantized_model", trust_remote_code=True
)
model = AutoModelForCausalLM.from_pretrained(
    "Kikinoking/MNLP_M2_quantized_model",
    trust_remote_code=True,
    device_map="auto",
)
```
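A short generation call on top of the load above (the prompt is an illustrative assumption):

```python
prompt = "Question: What gas do plants absorb during photosynthesis?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```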
---

## Limitations

- Being a 0.6B-parameter model, it may struggle with very long or ambiguous queries.
- Quantization can introduce a slight drop in accuracy (~5–10%).

## License

CC BY-NC 4.0 (inherits from the base Qwen-3 license)
|
Lubna-qureshi-viral/full.lubna.qureshi.viral.video.highway.lubna.qureshi.and.manohar.lal.dhakad.official | Lubna-qureshi-viral | 2025-05-27T19:06:58Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-27T18:56:44Z | [🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )](https://videohere.top/?Lubna-qureshi)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://videohere.top/?Lubna-qureshi)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?Lubna-qureshi) |
Mohamed-Aly/BABYLM-TOKENIZER-CHAR-PHON | Mohamed-Aly | 2025-05-27T19:06:35Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T19:06:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Danielwu233/Llamma3.1-8B-Qlora | Danielwu233 | 2025-05-27T19:06:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T16:12:35Z | ---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Danielwu233
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jaisalmer-kaka-hd/18.jaisalmer.kaka.jaisalmer.kaka.viral.jaisalmer.kaka.original.here.TRENDING | jaisalmer-kaka-hd | 2025-05-27T19:05:13Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-27T18:57:20Z | [🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/?V=jaisalmer-kaka)
[🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/?V=jaisalmer-kaka)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=jaisalmer-kaka) |
tylerachang/bigram-subnetworks-gpt2-large | tylerachang | 2025-05-27T19:04:43Z | 0 | 0 | null | [
"eng",
"arxiv:2504.15471",
"license:apache-2.0",
"region:us"
]
| null | 2025-04-21T04:54:16Z |
---
license: apache-2.0
language:
- eng
---
# bigram-subnetworks-gpt2-large
We release bigram subnetworks as described in [Chang and Bergen (2025)](https://arxiv.org/abs/2504.15471).
These are sparse subsets of model parameters that recreate bigram predictions (next token predictions conditioned only on the current token) in Transformer language models.
This repository contains the bigram subnetwork for [openai-community/gpt2-large](https://huggingface.co/openai-community/gpt2-large).
## Format
A subnetwork file is a pickled Python dictionary that maps the original model parameter names to numpy binary masks with the same shapes as the original model parameters (1: keep, 0: drop).
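For illustration, a sketch of inspecting such a mask file directly with the standard library (the local filename is hypothetical; the helper functions below are the supported route):

```python
# Sketch: inspecting a subnetwork mask file; the filename is hypothetical.
import pickle

with open("bigram_subnetwork.pkl", "rb") as f:
    mask_dict = pickle.load(f)

for name, mask in mask_dict.items():
    kept = mask.sum() / mask.size  # fraction of parameters kept (1: keep, 0: drop)
    print(f"{name}: shape={mask.shape}, kept={kept:.4%}")
```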
For details on usage, see: https://github.com/tylerachang/bigram-subnetworks.
For details on how these subnetworks were trained, see [Chang and Bergen (2025)](https://arxiv.org/abs/2504.15471).
For minimal usage, download the code at https://github.com/tylerachang/bigram-subnetworks (or just the file `circuit_loading_utils.py`) and run in Python:
```python
from circuit_loading_utils import load_bigram_subnetwork_dict, load_subnetwork_model
mask_dict = load_bigram_subnetwork_dict('openai-community/gpt2-large')
model, tokenizer, config = load_subnetwork_model('openai-community/gpt2-large', mask_dict)
```
## Citation
<pre>
@article{chang-bergen-2025-bigram,
title={Bigram Subnetworks: Mapping to Next Tokens in Transformer Language Models},
author={Chang, Tyler A. and Bergen, Benjamin K.},
journal={Preprint},
year={2025},
url={https://arxiv.org/abs/2504.15471},
}
</pre>
|
shaojintian/llaca-0.5B | shaojintian | 2025-05-27T19:03:03Z | 0 | 0 | null | [
"safetensors",
"ComplexFormer",
"license:apache-2.0",
"region:us"
]
| null | 2025-05-27T19:00:25Z | ---
license: apache-2.0
---
|
BootesVoid/cmb6uw4gb071rlexprcgvwbtx_cmb6uzmsu072plexpgfl5fg4h | BootesVoid | 2025-05-27T19:01:36Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-27T19:01:34Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: vanessa_
---
# Cmb6Uw4Gb071Rlexprcgvwbtx_Cmb6Uzmsu072Plexpgfl5Fg4H
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `vanessa_` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "vanessa_",
"lora_weights": "https://huggingface.co/BootesVoid/cmb6uw4gb071rlexprcgvwbtx_cmb6uzmsu072plexpgfl5fg4h/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb6uw4gb071rlexprcgvwbtx_cmb6uzmsu072plexpgfl5fg4h', weight_name='lora.safetensors')
image = pipeline('vanessa_').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb6uw4gb071rlexprcgvwbtx_cmb6uzmsu072plexpgfl5fg4h/discussions) to add images that show off what you’ve made with this LoRA.
|
PRODRI007/ebooks | PRODRI007 | 2025-05-27T19:00:04Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-05-27T19:00:04Z | ---
license: apache-2.0
---
|
amaurypllx/MNLP_M2_quantized_model | amaurypllx | 2025-05-27T18:59:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2025-05-27T18:59:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
HPLT/hplt2c_eng90-edu_fra10_checkpoints | HPLT | 2025-05-27T18:57:59Z | 0 | 0 | null | [
"pytorch",
"llama",
"HPLT",
"decoder",
"en",
"dataset:HPLT/HPLT2.0_cleaned",
"arxiv:2503.10267",
"license:apache-2.0",
"region:us"
]
| null | 2025-05-26T08:49:52Z | ---
language:
- en
tags:
- HPLT
- decoder
license: apache-2.0
datasets:
- HPLT/HPLT2.0_cleaned
---
# HPLT v2.0 - Cleaned - English (90%)
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the decoder-only language models trained on [HPLT2.0_cleaned](https://huggingface.co/datasets/HPLT/HPLT2.0_cleaned).
All HPLT decoder-only models use the same hyperparameters, roughly following the Llama architecture, with 2.15B parameters in total:
- hidden size: 2048
- attention heads: 32
- layers: 24
- sequence length: 2048
## Intermediate checkpoints
We release intermediate checkpoints for each model every 1000 training steps, each in its own branch. Branches are named `checkpoint_XXXXXXX`, with the step number zero-padded to seven digits: for example, `checkpoint_0005000`. The checkpoints range from `checkpoint_0001000` to `checkpoint_0047684`, and the latter is also in the main branch.
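
For example, a specific intermediate checkpoint can be loaded by passing its branch name as the `revision` argument (a minimal sketch using the standard transformers API):

```python
# Sketch: loading an intermediate checkpoint from its branch (revision).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "HPLT/hplt2c_eng90-edu_fra10_checkpoints"
tokenizer = AutoTokenizer.from_pretrained(repo, revision="checkpoint_0005000")
model = AutoModelForCausalLM.from_pretrained(repo, revision="checkpoint_0005000")
```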
## Cite us
```bibtex
@misc{burchell2025expandedmassivemultilingualdataset,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
year={2025},
eprint={2503.10267},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2503.10267},
}
``` |
punith0110/sft-tiny-chatbot | punith0110 | 2025-05-27T18:53:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T18:52:31Z | ---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
library_name: transformers
model_name: sft-tiny-chatbot
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for sft-tiny-chatbot
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="punith0110/sft-tiny-chatbot", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.52.2
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
hjghjgn/hjgjhj | hjghjgn | 2025-05-27T18:51:17Z | 0 | 0 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
]
| null | 2025-05-27T18:51:17Z | ---
license: bigscience-bloom-rail-1.0
---
|
aamijar/Llama-2-7b-hf-lora-r1024-boolq-portlora-epochs6 | aamijar | 2025-05-27T18:50:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T18:50:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dimasik2987/20405430-db49-4bce-a10d-37a0e37de08b | dimasik2987 | 2025-05-27T18:49:33Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:NousResearch/Nous-Capybara-7B-V1.9",
"base_model:quantized:NousResearch/Nous-Capybara-7B-V1.9",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2025-05-27T17:25:31Z | ---
base_model: NousResearch/Nous-Capybara-7B-V1.9
library_name: transformers
model_name: 20405430-db49-4bce-a10d-37a0e37de08b
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---
# Model Card for 20405430-db49-4bce-a10d-37a0e37de08b
This model is a fine-tuned version of [NousResearch/Nous-Capybara-7B-V1.9](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1.9).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dimasik2987/20405430-db49-4bce-a10d-37a0e37de08b", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-7/runs/qgmz9hnx)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
manuross1/nrmmtrfckdfll500 | manuross1 | 2025-05-27T18:48:55Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-27T18:32:23Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: nrmmtrfckdfll500
---
# Nrmmtrfckdfll500
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `nrmmtrfckdfll500` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "nrmmtrfckdfll500",
"lora_weights": "https://huggingface.co/manuross1/nrmmtrfckdfll500/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('manuross1/nrmmtrfckdfll500', weight_name='lora.safetensors')
image = pipeline('nrmmtrfckdfll500').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 750
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/manuross1/nrmmtrfckdfll500/discussions) to add images that show off what you’ve made with this LoRA.
|
beanne-valerie-hd/beanne.scandal.beanne.valerie.dela.cruz.beanne.valerie.dela.cruz.telegram | beanne-valerie-hd | 2025-05-27T18:46:36Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-27T18:44:33Z | [🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/?V=beanne-valerie)
[🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/?V=beanne-valerie)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=beanne-valerie) |
zinec/finetuned-eval-qwen3-0.6B | zinec | 2025-05-27T18:46:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T18:41:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
plumpyfield/natix3 | plumpyfield | 2025-05-27T18:44:44Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2025-05-27T18:44:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
oskdabk/test_model_2 | oskdabk | 2025-05-27T18:41:57Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"torchao",
"region:us"
]
| text-generation | 2025-05-27T18:41:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RobertoNeglia/pepe_generator_sd2 | RobertoNeglia | 2025-05-27T18:37:21Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:stabilityai/stable-diffusion-2",
"base_model:adapter:stabilityai/stable-diffusion-2",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2025-05-27T13:32:07Z | ---
base_model: stabilityai/stable-diffusion-2
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - RobertoNeglia/pepe_generator_sd2
These are LoRA adaptation weights for stabilityai/stable-diffusion-2, fine-tuned on the RobertoNeglia/pepe_dataset dataset. Some example images are shown below.




## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
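
Until the snippet above is filled in, a minimal sketch, assuming the standard diffusers LoRA-loading API and the default weight file produced by the training script (the prompt is illustrative):

```python
# Minimal sketch: load the base model, attach the LoRA weights, and generate.
# Assumes the default weight file name written by the diffusers training script.
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-2", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("RobertoNeglia/pepe_generator_sd2")
image = pipeline("pepe the frog wearing a crown, meme style").images[0]  # illustrative prompt
image.save("pepe.png")
```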
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
Nasrin02/Nasrin | Nasrin02 | 2025-05-27T18:34:34Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-05-27T18:34:34Z | ---
license: apache-2.0
---
|
BootesVoid/cmb17aape05h4u1cgfybugm82_cmb6tq3zd06rvlexpqoqenle8 | BootesVoid | 2025-05-27T18:32:57Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-27T18:32:56Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: LATINA
---
# Cmb17Aape05H4U1Cgfybugm82_Cmb6Tq3Zd06Rvlexpqoqenle8
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `LATINA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "LATINA",
"lora_weights": "https://huggingface.co/BootesVoid/cmb17aape05h4u1cgfybugm82_cmb6tq3zd06rvlexpqoqenle8/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb17aape05h4u1cgfybugm82_cmb6tq3zd06rvlexpqoqenle8', weight_name='lora.safetensors')
image = pipeline('LATINA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb17aape05h4u1cgfybugm82_cmb6tq3zd06rvlexpqoqenle8/discussions) to add images that show off what you’ve made with this LoRA.
|
Mohamed-Aly/BABYLM-TOKENIZER-BPE-PHON-SPACELESS | Mohamed-Aly | 2025-05-27T18:32:07Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T18:32:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BeckerAnas/still-universe-209 | BeckerAnas | 2025-05-27T18:31:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"convnextv2",
"image-classification",
"generated_from_trainer",
"base_model:facebook/convnextv2-tiny-1k-224",
"base_model:finetune:facebook/convnextv2-tiny-1k-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2025-05-27T10:46:05Z | ---
library_name: transformers
license: apache-2.0
base_model: facebook/convnextv2-tiny-1k-224
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: still-universe-209
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# still-universe-209
This model is a fine-tuned version of [facebook/convnextv2-tiny-1k-224](https://huggingface.co/facebook/convnextv2-tiny-1k-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5721
- Accuracy: 0.6497
- Precision: 0.6965
- Recall: 0.6497
- F1: 0.6583
- Roc Auc: 0.8795
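
Pending an official example, a minimal inference sketch follows; the input path is a placeholder, and the labels come from whatever `id2label` mapping the checkpoint ships with, since the card does not document the class set.

```python
from transformers import pipeline

# Minimal sketch: the repo is tagged image-classification, so the standard
# pipeline should work; "example.jpg" is a hypothetical input path.
classifier = pipeline("image-classification", model="BeckerAnas/still-universe-209")
for pred in classifier("example.jpg"):
    print(f"{pred['label']}: {pred['score']:.3f}")
```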
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
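
For readers who want to reproduce this setup, the listed values map onto `TrainingArguments` roughly as shown below. This is a sketch only: the dataset, model wiring, and `Trainer` call are not documented in this card, and `output_dir` is illustrative.

```python
from transformers import TrainingArguments

# Hyperparameters transcribed from the list above.
args = TrainingArguments(
    output_dir="still-universe-209",
    learning_rate=1e-4,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=15,
)
```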
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 1.379 | 1.0 | 17 | 1.3021 | 0.4544 | 0.5543 | 0.4544 | 0.4617 | 0.7377 |
| 1.3695 | 2.0 | 34 | 1.2286 | 0.5391 | 0.5315 | 0.5391 | 0.5266 | 0.7806 |
| 1.1945 | 3.0 | 51 | 1.1127 | 0.5794 | 0.5651 | 0.5794 | 0.5647 | 0.8098 |
| 1.0517 | 4.0 | 68 | 0.8318 | 0.5872 | 0.6134 | 0.5872 | 0.5961 | 0.8273 |
| 1.0323 | 5.0 | 85 | 0.8958 | 0.5156 | 0.6297 | 0.5156 | 0.5319 | 0.8189 |
| 0.9029 | 6.0 | 102 | 0.7313 | 0.5365 | 0.6126 | 0.5365 | 0.5398 | 0.8267 |
| 0.9002 | 7.0 | 119 | 0.7217 | 0.5794 | 0.5998 | 0.5794 | 0.5558 | 0.8445 |
| 0.7855 | 8.0 | 136 | 0.6522 | 0.6029 | 0.6629 | 0.6029 | 0.6064 | 0.8581 |
| 0.756 | 9.0 | 153 | 0.6371 | 0.5964 | 0.6263 | 0.5964 | 0.5653 | 0.8643 |
| 0.7164 | 10.0 | 170 | 0.6291 | 0.5690 | 0.6930 | 0.5690 | 0.5780 | 0.8579 |
| 0.6894 | 11.0 | 187 | 0.6194 | 0.5938 | 0.6360 | 0.5938 | 0.5735 | 0.8699 |
| 0.6606 | 12.0 | 204 | 0.5834 | 0.6289 | 0.6906 | 0.6289 | 0.6402 | 0.8742 |
| 0.6273 | 13.0 | 221 | 0.5766 | 0.6510 | 0.6972 | 0.6510 | 0.6607 | 0.8780 |
| 0.6046 | 14.0 | 238 | 0.5732 | 0.6497 | 0.6965 | 0.6497 | 0.6583 | 0.8790 |
| 0.6255 | 15.0 | 255 | 0.5721 | 0.6497 | 0.6965 | 0.6497 | 0.6583 | 0.8795 |
### Framework versions
- Transformers 4.52.3
- Pytorch 2.7.0+cpu
- Datasets 3.6.0
- Tokenizers 0.21.0
|
Jobz-Hunting-pakistani-viral-videos/EXCLUSIVE.VIDEO.NOW.leaked.Jobz.Hunting.Sajal.Malik.viral.video.original | Jobz-Hunting-pakistani-viral-videos | 2025-05-27T18:30:37Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-27T18:30:01Z | <a rel="nofollow" href="https://viralflix.xyz/leaked/?new">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️​</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?new">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️​</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?new"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
|
Hsianchengfun/1B-80epoch | Hsianchengfun | 2025-05-27T18:29:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T18:26:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
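
Until the authors supply a snippet, here is a minimal text-generation sketch; the chat-message format and sampling settings are assumptions, not documented in this card.

```python
from transformers import pipeline

# Sketch only: assumes the Llama checkpoint ships a chat template.
generator = pipeline("text-generation", model="Hsianchengfun/1B-80epoch", device_map="auto")
messages = [{"role": "user", "content": "Summarize what a model card is."}]
print(generator(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```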
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
joshcd/MNLP_M2_document_encoder | joshcd | 2025-05-27T18:29:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T18:15:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
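
Until the authors supply a snippet, a hedged sketch follows. Because the repo is tagged `custom_code`, `trust_remote_code=True` is likely required; treating the last hidden state as the document representation is an assumption, since the intended encoder API is not documented.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "joshcd/MNLP_M2_document_encoder"
tok = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tok("An example document to encode.", return_tensors="pt")
outputs = model(**inputs, output_hidden_states=True)
print(outputs.hidden_states[-1].shape)  # last-layer token representations
```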
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mlx-community/sarvam-m-8bit | mlx-community | 2025-05-27T18:29:03Z | 2 | 0 | mlx | [
"mlx",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"bn",
"hi",
"kn",
"gu",
"mr",
"ml",
"or",
"pa",
"ta",
"te",
"base_model:sarvamai/sarvam-m",
"base_model:finetune:sarvamai/sarvam-m",
"license:apache-2.0",
"8-bit",
"region:us"
]
| text-generation | 2025-05-27T00:12:38Z | ---
library_name: mlx
license: apache-2.0
language:
- en
- bn
- hi
- kn
- gu
- mr
- ml
- or
- pa
- ta
- te
base_model: sarvamai/sarvam-m
base_model_relation: finetune
pipeline_tag: text-generation
tags:
- mlx
---
# mlx-community/sarvam-m-8bit
This model [mlx-community/sarvam-m-8bit](https://huggingface.co/mlx-community/sarvam-m-8bit) was
converted to MLX format from [sarvamai/sarvam-m](https://huggingface.co/sarvamai/sarvam-m)
using mlx-lm version **0.24.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/sarvam-m-8bit")
prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
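
The same checkpoint can also be driven from the shell; assuming a recent mlx-lm install that ships the CLI entry point, something like `python -m mlx_lm.generate --model mlx-community/sarvam-m-8bit --prompt "hello"` should work.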
|
Krashouse/Flux_nastya | Krashouse | 2025-05-27T18:27:31Z | 0 | 0 | null | [
"license:other",
"region:us"
]
| null | 2025-05-27T15:47:51Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
keko24/MNLP_M2_mcqa_model-W4A8-Dynamic-Per-Token | keko24 | 2025-05-27T18:24:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"compressed-tensors",
"region:us"
]
| text-generation | 2025-05-27T18:23:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
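
In the absence of an official snippet, a hedged sketch follows. The repo carries a compressed-tensors W4A8 config, so the `compressed-tensors` package is likely needed for transformers to load it, and vLLM is the more common runtime for such checkpoints; the prompt format below is an assumption.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "keko24/MNLP_M2_mcqa_model-W4A8-Dynamic-Per-Token"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tok("Question: 2 + 2 = ?\nAnswer:", return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**inputs, max_new_tokens=16)[0], skip_special_tokens=True))
```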
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RizhongLin/MNLP_M2_dpo_model | RizhongLin | 2025-05-27T18:24:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"trl",
"dpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T18:23:39Z | ---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
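
Until the authors provide one, a minimal sketch for this DPO-tuned Qwen3 checkpoint is shown below; the sampling settings are illustrative, not taken from the card.

```python
from transformers import pipeline

# Sketch only: assumes the checkpoint ships a standard chat template.
chat = pipeline("text-generation", model="RizhongLin/MNLP_M2_dpo_model", device_map="auto")
messages = [{"role": "user", "content": "Explain direct preference optimization in one sentence."}]
print(chat(messages, max_new_tokens=96, return_full_text=False)[0]["generated_text"])
```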
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MicPulseGh3/MicPulseGH | MicPulseGh3 | 2025-05-27T18:22:06Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-05-27T18:22:06Z | ---
license: apache-2.0
---
|
margaritamikhelson/MNLP_M2_mcqa_model | margaritamikhelson | 2025-05-27T18:21:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2025-05-27T18:20:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
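
Until the authors provide one, a sketch of sentence embeddings is shown below, since the repo is tagged feature-extraction; the mean-pooling strategy is an assumption, not documented in this card.

```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "margaritamikhelson/MNLP_M2_mcqa_model"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

enc = tok(["What is entropy?"], return_tensors="pt", padding=True)
with torch.no_grad():
    hidden = model(**enc).last_hidden_state        # (batch, seq, dim)
mask = enc["attention_mask"].unsqueeze(-1)
embedding = (hidden * mask).sum(1) / mask.sum(1)   # mean over real tokens
print(embedding.shape)
```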
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hopvfds/bhdfgffg | hopvfds | 2025-05-27T18:18:13Z | 0 | 0 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
]
| null | 2025-05-27T18:18:13Z | ---
license: bigscience-bloom-rail-1.0
---
|
Bonnief/mbert-am-100k-finetuned-II | Bonnief | 2025-05-27T18:14:58Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2025-05-27T11:28:46Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: mbert-am-100k-finetuned-II
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbert-am-100k-finetuned-II
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Accuracy: 0.2069
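
No usage example is provided; a minimal fill-mask sketch follows. The example sentence is arbitrary, and given the NaN evaluation loss reported above, outputs should be sanity-checked.

```python
from transformers import pipeline

# Sketch only: the repo is tagged fill-mask, so the standard pipeline
# and the BERT-style [MASK] token should apply.
fill = pipeline("fill-mask", model="Bonnief/mbert-am-100k-finetuned-II")
for pred in fill("Addis Ababa is the capital of [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```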
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 100000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
katrina-lim-viral-videos/VIDEO.18.Katrina.Lim.Kiffy.Viral.Video.Full.Video.Original.Clip | katrina-lim-viral-videos | 2025-05-27T18:14:38Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-27T18:14:06Z | <a rel="nofollow" href="https://viralflix.xyz/leaked/?new">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️​</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?new">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️​</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?new"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
|
DoniaGasmii/MNLP_M2_dpo_pure_pref | DoniaGasmii | 2025-05-27T18:13:29Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T18:13:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rtl-llm/qwen2.5coder-7b-origen-all-ordered-verilog-first | rtl-llm | 2025-05-27T18:12:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T18:09:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
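
No snippet is given; the sketch below assumes a plain-text prompt format for Verilog generation, which the repo name suggests but the card does not confirm.

```python
from transformers import pipeline

# Hedged sketch: prompt wording and decoding settings are illustrative.
gen = pipeline(
    "text-generation",
    model="rtl-llm/qwen2.5coder-7b-origen-all-ordered-verilog-first",
    device_map="auto",
)
prompt = "Write a Verilog module for a 4-bit synchronous up counter with reset."
print(gen(prompt, max_new_tokens=256, return_full_text=False)[0]["generated_text"])
```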
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ChrisKalahiki/mt0-large-lora | ChrisKalahiki | 2025-05-27T18:11:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T18:11:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
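
The repo name suggests a LoRA adapter for `bigscience/mt0-large`, though the card does not say so. Under that assumption (base model, seq2seq task, and example prompt are all unverified), a PEFT-based sketch:

```python
from peft import AutoPeftModelForSeq2SeqLM
from transformers import AutoTokenizer

# Assumption-heavy sketch: loads the adapter together with its presumed base.
model = AutoPeftModelForSeq2SeqLM.from_pretrained("ChrisKalahiki/mt0-large-lora")
tok = AutoTokenizer.from_pretrained("bigscience/mt0-large")

inputs = tok("Translate to English: Bonjour tout le monde.", return_tensors="pt")
print(tok.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```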
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vermoney/85fa0ba2-a848-4e57-a3c6-2be4516cf67d | vermoney | 2025-05-27T18:11:41Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:NousResearch/Nous-Capybara-7B-V1.9",
"base_model:quantized:NousResearch/Nous-Capybara-7B-V1.9",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2025-05-27T17:31:28Z | ---
base_model: NousResearch/Nous-Capybara-7B-V1.9
library_name: transformers
model_name: 85fa0ba2-a848-4e57-a3c6-2be4516cf67d
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---
# Model Card for 85fa0ba2-a848-4e57-a3c6-2be4516cf67d
This model is a fine-tuned version of [NousResearch/Nous-Capybara-7B-V1.9](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1.9).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vermoney/85fa0ba2-a848-4e57-a3c6-2be4516cf67d", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-9/runs/05823jmk)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
rtl-llm/qwen2.5coder-7b-origen-vhdl-verilog-vhdl-pymtl | rtl-llm | 2025-05-27T18:10:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T18:06:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dhintech/marian-ted2020-id-en-lg | dhintech | 2025-05-27T18:09:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"translation",
"indonesian",
"english",
"fine-tuned",
"meeting-translation",
"domain-adaptation",
"enhanced",
"id",
"en",
"dataset:ted_talks_iwslt",
"base_model:Helsinki-NLP/opus-mt-id-en",
"base_model:finetune:Helsinki-NLP/opus-mt-id-en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2025-05-27T12:51:05Z | ---
language:
- id
- en
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-id-en
tags:
- translation
- indonesian
- english
- marian
- fine-tuned
- meeting-translation
- domain-adaptation
- enhanced
pipeline_tag: translation
datasets:
- ted_talks_iwslt
library_name: transformers
metrics:
- bleu
- rouge
widget:
- text: "Selamat pagi semuanya, mari kita mulai rapat hari ini."
example_title: "Meeting Opening"
- text: "Tim marketing akan bertanggung jawab untuk strategi ini."
example_title: "Task Assignment"
- text: "Database migration sudah selesai dan berjalan dengan lancar."
example_title: "Technical Update"
---
# Enhanced MarianMT Indonesian-English Translation (Meeting Domain Adaptation)
This model is an **enhanced fine-tuned version** of [Helsinki-NLP/opus-mt-id-en](https://huggingface.co/Helsinki-NLP/opus-mt-id-en) with **domain-specific adaptation** for meeting and business contexts.
## 🎯 Model Highlights
- **Domain Adaptation**: Specialized for meeting and business translation
- **Enhanced Dataset**: TED2020 + 2000+ meeting-specific sentence pairs
- **Improved Performance**: Better BLEU scores on meeting contexts
- **Robust Training**: 80% dataset usage with domain mixing
- **Production Ready**: Optimized for real-world meeting scenarios
## 📊 Performance Metrics
| Metric | Base Model | This Model | Improvement |
|--------|------------|------------|-------------|
| BLEU Score | 1.467 | **3.736** | **+154.6%** |
| Translation Speed | 1.2s | **0.14s** | **-88.2%** |
| Meeting Context | Standard | **Enhanced** | **Domain Adapted** |
## 🚀 Model Details
- **Base Model**: Helsinki-NLP/opus-mt-id-en
- **Training Dataset**: TED2020 (80%) + Meeting Domain (10%)
- **Training Strategy**: Domain adaptation with enhanced learning
- **Specialization**: Business meetings, technical discussions, formal conversations
- **Training Date**: 2025-05-27
- **Languages**: Indonesian (id) → English (en)
- **License**: Apache 2.0
## 🛠️ Usage
```python
from transformers import MarianMTModel, MarianTokenizer
# Load model and tokenizer
model_name = "dhintech/marian-ted2020-id-en-lg"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
# Translate Indonesian to English
def translate(text):
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=128)
outputs = model.generate(
**inputs,
max_length=128,
num_beams=3,
early_stopping=True,
do_sample=False
)
return tokenizer.decode(outputs[0], skip_special_tokens=True)
# Example usage
indonesian_text = "Tim marketing akan bertanggung jawab untuk strategi ini."
english_translation = translate(indonesian_text)
print(english_translation)
# Output: "The marketing team will be responsible for this strategy."
```
## 📝 Example Translations
### Meeting Context Examples
| Indonesian | English | Context |
|------------|---------|---------|
| Selamat pagi semuanya, mari kita mulai rapat hari ini. | Good morning everyone, let's start today's meeting. | Meeting Opening |
| Tim marketing akan bertanggung jawab untuk strategi ini. | The marketing team will be responsible for this strategy. | Task Assignment |
| Database migration sudah selesai dan berjalan dengan lancar. | Database migration is complete and running smoothly. | Technical Update |
| Budget yang disetujui adalah 500 juta rupiah. | The approved budget is 500 million rupiah. | Financial Discussion |
## 🎯 Intended Use Cases
- **Business Meeting Translation**: Real-time translation during meetings
- **Technical Documentation**: Translating technical meeting notes
- **Corporate Communication**: Formal business correspondence
- **Project Management**: Translating project updates and reports
- **Training Materials**: Educational and training content translation
## 📊 Training Configuration
- **Dataset Size**: 118,626 sentence pairs
- **TED2020 Data**: 80% of cleaned dataset
- **Meeting Domain Data**: 10% specialized meeting content
- **Max Sequence Length**: 128 tokens
- **Training Epochs**: 12
- **Learning Rate**: 1e-05
- **Batch Size**: 12 (effective)
## 🔧 Technical Specifications
- **Model Architecture**: MarianMT (Transformer-based)
- **Parameters**: ~74M (with selective fine-tuning)
- **Max Input/Output Length**: 128 tokens
- **Inference Time**: ~0.14s per sentence
- **Memory Requirements**:
- GPU: 3GB VRAM minimum
- CPU: 4GB RAM minimum
## 🚨 Limitations
- **Domain Specificity**: Optimized for formal business/meeting contexts
- **Informal Language**: May not perform optimally on very casual Indonesian
- **Regional Dialects**: Trained primarily on standard Indonesian
- **Cultural Context**: Some cultural nuances may be lost in translation
## 📚 Citation
```bibtex
@misc{enhanced-marian-id-en-2025,
title={Enhanced MarianMT Indonesian-English Translation (Meeting Domain Adaptation)},
author={DhinTech},
year={2025},
publisher={Hugging Face},
journal={Hugging Face Model Hub},
  howpublished={\url{https://huggingface.co/dhintech/marian-ted2020-id-en-lg}},
note={Enhanced with TED2020 and meeting-specific domain adaptation}
}
```
## 🙏 Acknowledgments
- **Base Model**: Helsinki-NLP team for the original opus-mt-id-en model
- **Dataset**: TED2020 corpus and custom meeting domain data
- **Framework**: Hugging Face Transformers team
---
*This model is specifically enhanced for Indonesian business meeting translation scenarios with domain adaptation techniques.*
|
JorgeTC/electra-corrected-POS | JorgeTC | 2025-05-27T18:08:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"electra",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2025-05-27T18:08:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
18-Katrina-Lim-Kiffy-Viral-Video-Link-hd/INDIA.FULL.VIDEO.LINK.Katrina.Lim.Viral.Video.Leaks.Official | 18-Katrina-Lim-Kiffy-Viral-Video-Link-hd | 2025-05-27T18:08:35Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-27T18:08:07Z |
|
JorgeTC/miniLM-corrected-POS | JorgeTC | 2025-05-27T18:03:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2025-05-27T18:03:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hexuan21/Qwen2.5-7B-EnergyQA_lora | hexuan21 | 2025-05-27T18:01:43Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"region:us"
]
| null | 2025-05-27T17:16:00Z | ---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
slang88/gemma-sql | slang88 | 2025-05-27T17:59:33Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-1b-pt",
"base_model:finetune:google/gemma-3-1b-pt",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-22T16:09:42Z | ---
base_model: google/gemma-3-1b-pt
library_name: transformers
model_name: gemma-sql
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-sql
This model is a fine-tuned version of [google/gemma-3-1b-pt](https://huggingface.co/google/gemma-3-1b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="slang88/gemma-sql", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
phunghuy159/full_model_sft | phunghuy159 | 2025-05-27T17:59:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T17:44:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
othoi-113-viral-video-link-hdq/exclusive.link.othoiiii.viral.video.link.othoi.viral.video.link.1.13.second | othoi-113-viral-video-link-hdq | 2025-05-27T17:58:04Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-27T17:57:14Z |
|
maksymveremchuk/deepseek_qwen_32B_v2.1 | maksymveremchuk | 2025-05-27T17:56:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T17:54:42Z | ---
base_model: unsloth/deepseek-r1-distill-qwen-32b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** maksymveremchuk
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-qwen-32b-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
luis-orvium/prueba-memo-desde-checkpoint | luis-orvium | 2025-05-27T17:54:11Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T17:53:50Z | ---
base_model: unsloth/llama-3.2-1b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** luis-orvium
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
maksymveremchuk/deepseek_qwen_23B_v2.1 | maksymveremchuk | 2025-05-27T17:53:12Z | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T17:53:12Z | ---
base_model: unsloth/deepseek-r1-distill-qwen-32b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** maksymveremchuk
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-qwen-32b-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
othoi-113-viral-video-link-4k-hd/Original.othoiiii.viral.video.link.othoi.viral.video.link.1.13.second | othoi-113-viral-video-link-4k-hd | 2025-05-27T17:51:46Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-27T17:51:20Z |
|
BootesVoid/cmay2e8b8038bu1cguoswiyvb_cmb6sbsy206islexp4uw5jtb4 | BootesVoid | 2025-05-27T17:51:22Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-27T17:51:20Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: LIA
---
# Cmay2E8B8038Bu1Cguoswiyvb_Cmb6Sbsy206Islexp4Uw5Jtb4
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `LIA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "LIA",
"lora_weights": "https://huggingface.co/BootesVoid/cmay2e8b8038bu1cguoswiyvb_cmb6sbsy206islexp4uw5jtb4/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmay2e8b8038bu1cguoswiyvb_cmb6sbsy206islexp4uw5jtb4', weight_name='lora.safetensors')
image = pipeline('LIA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmay2e8b8038bu1cguoswiyvb_cmb6sbsy206islexp4uw5jtb4/discussions) to add images that show off what you’ve made with this LoRA.
|
aldigobbler/smol-moe-360M-v0.1 | aldigobbler | 2025-05-27T17:51:15Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-27T17:48:52Z | # smol-moe-360M-v0.1
**Experimental sparse MoE (Mixture of Experts) with 4x 360M Llama models (SmolLM2)**
The router is a learned gating network; the experts are:
- HuggingFaceTB/SmolLM2-360M-Instruct
- motexture/SmolLCoder-360M-Instruct
- prithivMLmods/SmolLM2-CoT-360M
- quwsarohi/SmolThink
## Training
- Dataset: [`flytech/python-codes-25k`](https://huggingface.co/datasets/flytech/python-codes-25k)
- Each sample is formatted as a chat:
```
[
{"role": "user", "content": instruction},
{"role": "assistant", "content": output}
]
```
- MoE layers at: 8, 12, 16, 20, 24, 28 (out of 32 total)
- Top-2 routing (each token activates 2 out of 4 experts; a minimal gating sketch follows this list)
- Trained for a few epochs, batch size 4, gradient accumulation 8, max length 1024
- Used AdamW, linear warmup, and auxiliary load balancing loss
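The layer code isn't published in this card, so here is a minimal PyTorch sketch of what a top-2 gated MoE block with an auxiliary load-balancing term can look like. The expert MLP shape, the SiLU activation, and the squared-load auxiliary loss are illustrative assumptions, not the exact implementation behind this checkpoint:

```python
import torch
from torch import nn

class Top2MoE(nn.Module):
    """Sketch of a sparse MoE layer: a learned linear router sends each
    token to its top-2 of 4 expert MLPs and mixes their outputs."""
    def __init__(self, d_model: int, d_ff: int, n_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])
        self.top_k = top_k

    def forward(self, x: torch.Tensor):  # x: (n_tokens, d_model)
        probs = self.router(x).softmax(dim=-1)          # (n_tokens, n_experts)
        gate, idx = probs.topk(self.top_k, dim=-1)      # top-2 experts per token
        gate = gate / gate.sum(dim=-1, keepdim=True)    # renormalize the two gates
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                sel = idx[:, k] == e                    # tokens routed to expert e
                if sel.any():
                    out[sel] += gate[sel, k].unsqueeze(-1) * expert(x[sel])
        # Auxiliary load-balancing term: average router mass per expert,
        # minimized when usage is uniform across experts.
        load = probs.mean(dim=0)
        aux_loss = (load * load).sum() * len(self.experts)
        return out, aux_loss
```

In the checkpoint described above, a layer like this replaces the dense MLP at layers 8, 12, 16, 20, 24, and 28.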
## Model
- Total params: ~1.5B (but only 2 experts active per token, so much faster than a dense 4x model)
- All expert MLPs are included in the checkpoint, so you don’t need the original models
- Router and experts are trained end-to-end
- Checkpoints include: `pytorch_model.bin` (full model) and `config.json` (architecture info)
## Results
### COME BACK LATER, IT'S TRAINING
## Notes
- This is a real MoE: router is learned, experts are tied into the same model, and routing is sparse (top-2).
- For research/experimentation only.
- If you make something cool with it, let me know!
---
*smol-moe-360M-v0.1: for science, for fun, for smol code* |
BootesVoid/cmb68j487037slexpyp14cyxw_cmb69dzvn03avlexphlqqxvt8 | BootesVoid | 2025-05-27T17:49:43Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-27T17:49:41Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: jaylin
---
# Cmb68J487037Slexpyp14Cyxw_Cmb69Dzvn03Avlexphlqqxvt8
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `jaylin` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "jaylin",
"lora_weights": "https://huggingface.co/BootesVoid/cmb68j487037slexpyp14cyxw_cmb69dzvn03avlexphlqqxvt8/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb68j487037slexpyp14cyxw_cmb69dzvn03avlexphlqqxvt8', weight_name='lora.safetensors')
image = pipeline('jaylin').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb68j487037slexpyp14cyxw_cmb69dzvn03avlexphlqqxvt8/discussions) to add images that show off what you’ve made with this LoRA.
|
Farmerobot/deepseek-r1-among-them | Farmerobot | 2025-05-27T17:48:06Z | 31 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-24T17:16:19Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
reza-rgb/MNLP_M2_dpo_model | reza-rgb | 2025-05-27T17:47:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T17:45:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Smxldo/MNLP_M2_document_encoder | Smxldo | 2025-05-27T17:44:55Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"en",
"dataset:s2orc",
"dataset:flax-sentence-embeddings/stackexchange_xml",
"dataset:ms_marco",
"dataset:gooaq",
"dataset:yahoo_answers_topics",
"dataset:code_search_net",
"dataset:search_qa",
"dataset:eli5",
"dataset:snli",
"dataset:multi_nli",
"dataset:wikihow",
"dataset:natural_questions",
"dataset:trivia_qa",
"dataset:embedding-data/sentence-compression",
"dataset:embedding-data/flickr30k-captions",
"dataset:embedding-data/altlex",
"dataset:embedding-data/simple-wiki",
"dataset:embedding-data/QQP",
"dataset:embedding-data/SPECTER",
"dataset:embedding-data/PAQ_pairs",
"dataset:embedding-data/WikiAnswers",
"arxiv:1904.06472",
"arxiv:2102.07033",
"arxiv:2104.08727",
"arxiv:1704.05179",
"arxiv:1810.09305",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2025-05-21T09:33:52Z | ---
language: en
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- s2orc
- flax-sentence-embeddings/stackexchange_xml
- ms_marco
- gooaq
- yahoo_answers_topics
- code_search_net
- search_qa
- eli5
- snli
- multi_nli
- wikihow
- natural_questions
- trivia_qa
- embedding-data/sentence-compression
- embedding-data/flickr30k-captions
- embedding-data/altlex
- embedding-data/simple-wiki
- embedding-data/QQP
- embedding-data/SPECTER
- embedding-data/PAQ_pairs
- embedding-data/WikiAnswers
pipeline_tag: sentence-similarity
---
# all-MiniLM-L12-v2
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/all-MiniLM-L12-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L12-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L12-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
```
------
## Background
The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised
contrastive learning objective. We used the pretrained [`microsoft/MiniLM-L12-H384-uncased`](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) model and fine-tuned it on a
1B sentence-pair dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which, out of a set of randomly sampled other sentences, was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face. We developed this model as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as guidance from Google's Flax, JAX, and Cloud team members about efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
By default, input text longer than 256 word pieces is truncated.
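For instance, a minimal semantic-similarity sketch (the corpus and query below are our own illustrative examples, not from the training data):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/all-MiniLM-L12-v2')

# A tiny illustrative corpus and query
corpus = ["A man is eating food.", "A monkey is playing drums.", "A cheetah chases its prey."]
query = "Someone is having a meal."

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Cosine similarity between the query and each corpus sentence
scores = util.cos_sim(query_embedding, corpus_embeddings)[0]
for sentence, score in zip(corpus, scores):
    print(f"{score:.4f}  {sentence}")
```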
## Training procedure
### Pre-training
We use the pretrained [`microsoft/MiniLM-L12-H384-uncased`](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.
### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between every possible sentence pair in the batch.
We then apply the cross entropy loss by comparing with true pairs.
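Conceptually, this in-batch contrastive step looks roughly like the sketch below (our own simplification; the actual implementation is in `train_script.py`, and the scale factor is an assumption):
```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchor_emb, positive_emb, scale=20.0):
    # anchor_emb, positive_emb: (batch, dim) L2-normalized sentence embeddings
    # Cosine similarity between every anchor and every candidate in the batch
    scores = anchor_emb @ positive_emb.T * scale        # (batch, batch)
    # The true pair for anchor i sits at column i; the other columns act as in-batch negatives
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)
```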
#### Hyper parameters
We trained our model on a TPU v3-8 for 100k steps using a batch size of 1024 (128 per TPU core).
We used a learning rate warm-up of 500 steps and limited the sequence length to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this repository: `train_script.py`.
#### Training data
We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion.
We sampled each dataset with a weighted probability; the configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,170,060,424** | |
nielsgl/olmOCR-7B-0225-preview-8bit | nielsgl | 2025-05-27T17:43:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"mlx",
"conversational",
"en",
"dataset:allenai/olmOCR-mix-0225",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-7B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2025-05-27T17:32:28Z | ---
language:
- en
license: apache-2.0
datasets:
- allenai/olmOCR-mix-0225
base_model:
- Qwen/Qwen2-VL-7B-Instruct
library_name: transformers
tags:
- mlx
---
# nielsgl/olmOCR-7B-0225-preview-8bit
This model was converted to MLX format from [`allenai/olmOCR-7B-0225-preview`](https://huggingface.co/allenai/olmOCR-7B-0225-preview) using mlx-vlm version **0.1.26**.
Refer to the [original model card](https://huggingface.co/allenai/olmOCR-7B-0225-preview) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model nielsgl/olmOCR-7B-0225-preview-8bit --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
|
eth-nlped/TutorRL-7B-think | eth-nlped | 2025-05-27T17:42:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"math-tutor",
"grpo",
"conversational",
"dataset:SynthLabsAI/Big-Math-RL-Verified",
"arxiv:2505.15607",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T11:59:52Z | ---
library_name: transformers
license: apache-2.0
license_link: https://github.com/eth-lre/PedagogicalRL/blob/main/LICENSE
pipeline_tag: text-generation
base_model:
- Qwen/Qwen2.5-7B-Instruct
tags:
- math-tutor
- grpo
datasets:
- SynthLabsAI/Big-Math-RL-Verified
---
# TutorRL-7B-think
## Overview
**TutorRL-7B-think** is a fine-tuned variant of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct), trained to act as a math **tutor** rather than a solver. It is aligned to pedagogical principles using **reinforcement learning (GRPO)** in a synthetic multi-turn classroom setting, without requiring any human-labeled data.
This model was developed as part of the research project [*From Problem-Solving to Teaching Problem-Solving*](https://arxiv.org/abs/2505.15607), which proposes a scalable, annotation-free approach to training LLMs as **educational tutors**. Instead of directly answering questions, the model is optimized to scaffold reasoning, guide through Socratic questioning, and withhold final solutions when beneficial for learning.
Repository: [https://github.com/eth-lre/PedagogicalRL](https://github.com/eth-lre/PedagogicalRL)
## Intended Use
This model is intended for use in:
* Interactive math tutoring
* Socratic dialogue generation
* Research on educational alignment of LLMs
* Safe and indirect teaching in problem-solving contexts
## Thinking
This model variant allows for hidden thinking.
The thinking content is enclosed in tags: `<think> ... </think>`.
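If you only want the visible reply, a minimal post-processing sketch (assuming generations follow this tag format) is:
```python
import re

def strip_think(text: str) -> str:
    # Drop any <think> ... </think> spans, keeping only the tutor's visible reply
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

# Hypothetical output used purely for illustration
print(strip_think("<think>Guide, don't solve.</think> What could you subtract from both sides first?"))
```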
## Example Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "eth-nlped/TutorRL-7B-think"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
messages = [
{"role": "user", "content": "Can you help me solve 3x + 5 = 20?"}
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Citation
If you use this model or build upon the training framework, please cite:
```
@misc{dinucujianu2025problemsolvingteachingproblemsolvingaligning,
title={From Problem-Solving to Teaching Problem-Solving: Aligning LLMs with Pedagogy using Reinforcement Learning},
author={David Dinucu-Jianu and Jakub Macina and Nico Daheim and Ido Hakimi and Iryna Gurevych and Mrinmaya Sachan},
year={2025},
eprint={2505.15607},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.15607}
}
``` |
ngfh54456/bvcvdfsa | ngfh54456 | 2025-05-27T17:41:53Z | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
]
| null | 2025-05-27T17:41:53Z | ---
license: bigcode-openrail-m
---
|
dwi1205/A1B2C3 | dwi1205 | 2025-05-27T17:41:17Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-05-27T17:41:17Z | ---
license: apache-2.0
---
|
Mohamed-Aly/BABYLM-TOKENIZER-CHAR-TXT | Mohamed-Aly | 2025-05-27T17:40:43Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T17:40:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jegeblad/poca-SoccerTwos | jegeblad | 2025-05-27T17:40:02Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2025-05-27T06:17:59Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: jegeblad/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
moelanoby/ALM-Qwen-0.5B-testing | moelanoby | 2025-05-27T17:35:41Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-24T13:55:25Z | ---
base_model:
- Qwen/Qwen2.5-0.5B-Instruct
library_name: transformers
---
# ALM-Qwen Model: ALM-Qwen-0.5B-testing
This repository contains an Attention-Linked Memory augmented Qwen model (ALM-Qwen).
## Model Components
* **AttentionLinkedMemory (ALM)**: A custom PyTorch module for two-level attention-based retrieval from structured memory; a toy sketch of the idea follows this list. (See `ALM.py`)
* **QwenGenerator**: Wraps a Hugging Face Qwen model (e.g., Qwen2.5-0.5B-Instruct or Qwen2.5-7B-Instruct) for text generation.
* **ALMQwenModel_HF**: The main class orchestrating the ALM retrieval and Qwen generation. (See `alm_qwen.py`)
* **Saved Weights & Config**:
* `alm_layer_state_dict.pth`: Trained weights for the ALM layer.
* `alm_qwen_hf_config.json`: Configuration for the `ALMQwenModel_HF`, including ALM parameters and paths to the Qwen components.
* `qwen_generator/`: Contains the saved Hugging Face Qwen model and tokenizer.
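For intuition only, here is a toy sketch of what two-level attention retrieval over bucketed memory could look like. This is our own illustration under assumed tensor shapes, not the actual `ALM.py` implementation:
```python
import torch
import torch.nn.functional as F

def two_level_retrieval(query, memory, memory_mask):
    # query: (B, D); memory: (B, num_buckets, items_per_bucket, D)
    # memory_mask: (B, num_buckets, items_per_bucket), True where an item is valid
    valid = memory_mask.unsqueeze(-1).float()
    # Level 1: attend over bucket summaries (mean of the valid items in each bucket)
    bucket_summary = (memory * valid).sum(2) / valid.sum(2).clamp(min=1e-9)  # (B, nb, D)
    bucket_attn = F.softmax(torch.einsum('bd,bnd->bn', query, bucket_summary), dim=-1)
    # Level 2: attend over the items inside each bucket
    item_scores = torch.einsum('bd,bnid->bni', query, memory)
    item_scores = item_scores.masked_fill(~memory_mask, float('-inf'))
    item_attn = F.softmax(item_scores, dim=-1)                               # (B, nb, ni)
    # Combine both levels and pool the memory into a single retrieved vector
    weights = bucket_attn.unsqueeze(-1) * item_attn                          # (B, nb, ni)
    return torch.einsum('bni,bnid->bd', weights, memory)

# Example with random tensors matching the shapes above
q = torch.randn(1, 64)
mem = torch.randn(1, 3, 5, 64)
mask = torch.ones(1, 3, 5, dtype=torch.bool)
print(two_level_retrieval(q, mem, mask).shape)  # torch.Size([1, 64])
```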
## How to Use
1. **Prerequisites**:
```bash
pip install torch transformers huggingface_hub sentencepiece accelerate
# Add other dependencies if any, e.g., bitsandbytes for quantization
```
2. **Clone the repository (or download files manually)**:
```bash
git lfs install # if large files are used, though typically not for these components directly
git clone https://huggingface.co/moelanoby/ALM-Qwen-0.5B-testing
cd ALM-Qwen-0.5B-testing
```
3. **Load the model in Python**:
```python
from alm_qwen import ALMQwenModel_HF # Make sure alm_qwen.py and ALM.py are in your PYTHONPATH
import torch
# Desired device
device = "cuda" if torch.cuda.is_available() else "cpu"
# Path to the directory where you cloned/downloaded the model
model_directory = "." # Or the specific path if you are running from outside the cloned repo
# Load the model
loaded_model = ALMQwenModel_HF.load_model(model_directory, device=device)
print("ALM-Qwen model loaded successfully!")
# --- Prepare Dummy Input Data (similar to the example in alm_qwen_hf.py) ---
# batch_size = 1
# alm_query_dim = loaded_model.alm_config['query_dim']
# alm_memory_dim = loaded_model.alm_config['memory_dim']
# num_kb_buckets = 3 # Example
# max_kb_items_per_bucket = 5 # Example
# query_texts = ["What is the capital of France?"]
# query_embeddings_for_alm = torch.randn(batch_size, alm_query_dim)
# memory_item_embeddings = torch.randn(batch_size, num_kb_buckets, max_kb_items_per_bucket, alm_memory_dim)
# memory_text_items = [[["Paris is the capital of France." for _ in range(max_kb_items_per_bucket)] for _ in range(num_kb_buckets)] for _ in range(batch_size)]
# memory_mask = torch.ones(batch_size, num_kb_buckets, max_kb_items_per_bucket, dtype=torch.bool)
# memory_mask[:, :, -1] = False # Example mask
# # Run inference
# generated_answers, _, _ = loaded_model(
# query_texts,
# query_embeddings_for_alm,
# memory_item_embeddings,
# memory_text_items,
# memory_mask
# )
# print(f"Query: {query_texts[0]}")
# print(f"Answer: {generated_answers[0]}")
```
## Training
The ALM layer (`alm_layer_state_dict.pth`) may have been trained separately. The Qwen model inside `qwen_generator/` is typically a pre-trained model from Hugging Face, possibly fine-tuned.
## Notes
* The Qwen model components can be large. Ensure you have sufficient disk space and network bandwidth.
* The `load_model` method in `alm_qwen_hf.py` handles the reconstruction of the composite model.
* If any errors occur, use `alm_qwen.py` directly
--- |
aldigobbler/smollmv2-135Mx3E-MoE-v0.1 | aldigobbler | 2025-05-27T17:29:57Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-27T15:37:44Z | # !! "moe" - routed inference between 3 different models without any tying
Experimental MoE with 3 experts totalling ~480M params.
The router is roughly 70M params.
No loss chart for this run; the router was trained on 15 samples. |
davanstrien/SmolLM2-360M-tldr-sft-2025-05-27_18-14 | davanstrien | 2025-05-27T17:29:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:HuggingFaceTB/SmolLM2-360M",
"base_model:finetune:HuggingFaceTB/SmolLM2-360M",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T16:15:27Z | ---
base_model: HuggingFaceTB/SmolLM2-360M
library_name: transformers
model_name: SmolLM2-360M-tldr-sft-2025-05-27_18-14
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for SmolLM2-360M-tldr-sft-2025-05-27_18-14
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-360M](https://huggingface.co/HuggingFaceTB/SmolLM2-360M).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="davanstrien/SmolLM2-360M-tldr-sft-2025-05-27_18-14", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/davanstrien/huggingface/runs/tsp2cqil)
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.52.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
lisabdunlap/balanced_sft_long_e10 | lisabdunlap | 2025-05-27T17:29:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/Qwen3-8B",
"base_model:finetune:unsloth/Qwen3-8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T17:28:07Z | ---
base_model: unsloth/Qwen3-8B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** lisabdunlap
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MarceauBBB/qwen3-0.6B-Base-ORPO-OpenAnswers | MarceauBBB | 2025-05-27T17:26:46Z | 21 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-26T21:14:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AShi846/MNLP_M2_document_encoder | AShi846 | 2025-05-27T17:22:55Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"en",
"dataset:s2orc",
"dataset:flax-sentence-embeddings/stackexchange_xml",
"dataset:ms_marco",
"dataset:gooaq",
"dataset:yahoo_answers_topics",
"dataset:code_search_net",
"dataset:search_qa",
"dataset:eli5",
"dataset:snli",
"dataset:multi_nli",
"dataset:wikihow",
"dataset:natural_questions",
"dataset:trivia_qa",
"dataset:embedding-data/sentence-compression",
"dataset:embedding-data/flickr30k-captions",
"dataset:embedding-data/altlex",
"dataset:embedding-data/simple-wiki",
"dataset:embedding-data/QQP",
"dataset:embedding-data/SPECTER",
"dataset:embedding-data/PAQ_pairs",
"dataset:embedding-data/WikiAnswers",
"arxiv:1904.06472",
"arxiv:2102.07033",
"arxiv:2104.08727",
"arxiv:1704.05179",
"arxiv:1810.09305",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2025-05-27T14:33:12Z | ---
language: en
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- s2orc
- flax-sentence-embeddings/stackexchange_xml
- ms_marco
- gooaq
- yahoo_answers_topics
- code_search_net
- search_qa
- eli5
- snli
- multi_nli
- wikihow
- natural_questions
- trivia_qa
- embedding-data/sentence-compression
- embedding-data/flickr30k-captions
- embedding-data/altlex
- embedding-data/simple-wiki
- embedding-data/QQP
- embedding-data/SPECTER
- embedding-data/PAQ_pairs
- embedding-data/WikiAnswers
pipeline_tag: sentence-similarity
---
# all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
```
------
## Background
The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised
contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned it on a
1B sentence-pair dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which, out of a set of randomly sampled other sentences, was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face. We developed this model as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as guidance from Google's Flax, JAX, and Cloud team members about efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
By default, input text longer than 256 word pieces is truncated.
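As an illustration, a small semantic-search sketch (the documents and query are our own examples):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')

# Illustrative mini-corpus; any short passages work
docs = ["The capital of France is Paris.",
        "Photosynthesis converts light into chemical energy.",
        "The Eiffel Tower is in Paris."]
doc_embeddings = model.encode(docs, convert_to_tensor=True)

query_embedding = model.encode("Where is the Eiffel Tower?", convert_to_tensor=True)
hits = util.semantic_search(query_embedding, doc_embeddings, top_k=2)[0]
for hit in hits:
    print(f"{hit['score']:.4f}  {docs[hit['corpus_id']]}")
```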
## Training procedure
### Pre-training
We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.
### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between every possible sentence pair in the batch.
We then apply the cross entropy loss by comparing with true pairs.
#### Hyper parameters
We trained our model on a TPU v3-8 for 100k steps using a batch size of 1024 (128 per TPU core).
We used a learning rate warm-up of 500 steps and limited the sequence length to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this repository: `train_script.py`.
#### Training data
We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion.
We sampled each dataset with a weighted probability; the configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,170,060,424** | |
OlofBen/HeartLM-v4.3 | OlofBen | 2025-05-27T17:22:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"unsloth",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T17:05:36Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |