| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
karthikeyan-r/slm-custom-model_6k | karthikeyan-r | "2024-12-21T03:39:23Z" | 123 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-12-21T03:39:00Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
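Pending the official instructions, here is a minimal sketch based on the `t5`/`text2text-generation` tags in this repo's metadata (the example input is a placeholder, since the training task is not documented):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Repo id and task are taken from this card's metadata; the input
# string is a placeholder, as the intended prompt format is unknown.
model_id = "karthikeyan-r/slm-custom-model_6k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```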
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
aroot/eng-guj-simcse_central_ssrl | aroot | "2023-07-07T23:42:40Z" | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2023-07-07T23:24:29Z" | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-guj-simcse_central_ssrl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-guj-simcse_central_ssrl
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2825
- Bleu: 2.5968
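A minimal usage sketch for this checkpoint. The mBART-50 language codes (`en_XX` → `gu_IN`) are an assumption inferred from the "eng-guj" name, not stated on this card:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Fine-tuned from facebook/mbart-large-50-many-to-many-mmt (see above);
# the source/target language codes below are assumptions.
model_id = "aroot/eng-guj-simcse_central_ssrl"
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="en_XX")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("How are you today?", return_tensors="pt")
outputs = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["gu_IN"],
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```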
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
DevQuasar/OpenLLM-France.Claire-7B-EN-0.1-GGUF | DevQuasar | "2025-02-23T05:19:00Z" | 0 | 0 | null | [
"text-generation",
"base_model:OpenLLM-France/Claire-7B-EN-0.1",
"base_model:finetune:OpenLLM-France/Claire-7B-EN-0.1",
"region:us"
] | text-generation | "2025-02-23T05:18:59Z" | ---
base_model:
- OpenLLM-France/Claire-7B-EN-0.1
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [OpenLLM-France/Claire-7B-EN-0.1](https://huggingface.co/OpenLLM-France/Claire-7B-EN-0.1)
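A minimal sketch for running one of the GGUF files with `llama-cpp-python`; the filename below is a placeholder for whichever quant you download from this repo:

```python
from llama_cpp import Llama

# Placeholder filename -- substitute the GGUF file you downloaded.
llm = Llama(model_path="Claire-7B-EN-0.1.Q4_K_M.gguf", n_ctx=2048)
out = llm("The following is a friendly conversation:\n", max_tokens=64)
print(out["choices"][0]["text"])
```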
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
Mik99/phi-2_test_05 | Mik99 | "2024-01-16T11:55:10Z" | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"region:us"
] | null | "2024-01-16T11:54:51Z" | ---
library_name: peft
base_model: microsoft/phi-2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
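Pending the official instructions, a minimal sketch for loading this PEFT adapter on top of its `microsoft/phi-2` base (named in the card's metadata); the prompt format is a placeholder:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the adapter together with its microsoft/phi-2 base in one call;
# the "Question:/Answer:" prompt format is a placeholder.
adapter_id = "Mik99/phi-2_test_05"
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)

inputs = tokenizer("Question: What is PEFT?\nAnswer:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```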
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
dvs/swin-tiny-patch4-window7-224-mulder-v-scully-colab2 | dvs | "2023-08-20T18:18:23Z" | 215 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-08-20T18:14:38Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-mulder-v-scully-colab2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-mulder-v-scully-colab2
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5344
- Accuracy: 1.0
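A minimal inference sketch using the `transformers` pipeline; the image path is a placeholder:

```python
from transformers import pipeline

# Swin-tiny image classifier fine-tuned on an imagefolder dataset;
# replace "your_image.jpg" with a local image path or URL.
classifier = pipeline(
    "image-classification",
    model="dvs/swin-tiny-patch4-window7-224-mulder-v-scully-colab2",
)
print(classifier("your_image.jpg"))
```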
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 0.6899 | 0.5 |
| No log | 2.0 | 2 | 0.6701 | 0.25 |
| No log | 3.0 | 3 | 0.6309 | 0.5 |
| No log | 4.0 | 4 | 0.6049 | 0.5 |
| No log | 5.0 | 5 | 0.5828 | 0.5 |
| No log | 6.0 | 6 | 0.5650 | 0.75 |
| No log | 7.0 | 7 | 0.5486 | 0.75 |
| No log | 8.0 | 8 | 0.5344 | 1.0 |
| No log | 9.0 | 9 | 0.5240 | 1.0 |
| 0.2978 | 10.0 | 10 | 0.5149 | 1.0 |
| 0.2978 | 11.0 | 11 | 0.5066 | 1.0 |
| 0.2978 | 12.0 | 12 | 0.4980 | 1.0 |
| 0.2978 | 13.0 | 13 | 0.4880 | 1.0 |
| 0.2978 | 14.0 | 14 | 0.4699 | 1.0 |
| 0.2978 | 15.0 | 15 | 0.4507 | 1.0 |
| 0.2978 | 16.0 | 16 | 0.4310 | 1.0 |
| 0.2978 | 17.0 | 17 | 0.4155 | 1.0 |
| 0.2978 | 18.0 | 18 | 0.4054 | 1.0 |
| 0.2978 | 19.0 | 19 | 0.3994 | 1.0 |
| 0.1751 | 20.0 | 20 | 0.3970 | 1.0 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
LoneStriker/Qwen1.5-7B-Chat-8.0bpw-h8-exl2 | LoneStriker | "2024-02-17T14:36:27Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"arxiv:2309.16609",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-17T14:32:53Z" | ---
license: other
license_name: tongyi-qianwen
license_link: >-
https://huggingface.co/Qwen/Qwen1.5-7B-Chat/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- chat
---
# Qwen1.5-7B-Chat
## Introduction
Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. Compared with the previously released Qwen, the improvements include:
* 6 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, and 72B;
* Significant performance improvement in human preference for chat models;
* Multilingual support of both base and chat models;
* Stable support of 32K context length for models of all sizes
* No need of `trust_remote_code`.
For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).
<br>
## Model Details
Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, a mixture of sliding-window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and code. For the beta version, we have temporarily not included GQA or the mixture of SWA and full attention.
## Training details
We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization. However, DPO improves human preference evaluation but degrades benchmark evaluation; we will fix both problems in the near future.
## Requirements
The code for Qwen1.5 is included in the latest Hugging Face `transformers`, and we advise you to install `transformers>=4.37.0`; otherwise you might encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
The code snippet below uses `apply_chat_template` to show how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"Qwen/Qwen1.5-7B-Chat",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-7B-Chat")
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
For quantized models, we advise you to use the GPTQ, AWQ, and GGUF correspondents, namely `Qwen1.5-7B-Chat-GPTQ-Int4`, `Qwen1.5-7B-Chat-GPTQ-Int8`, `Qwen1.5-7B-Chat-AWQ`, and `Qwen1.5-7B-Chat-GGUF`.
## Tips
* If you encounter code switching or other bad cases, we advise you to use our provided hyper-parameters in `generation_config.json`.
## Citation
If you find our work helpful, feel free to cite us.
```
@article{qwen,
title={Qwen Technical Report},
author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu},
journal={arXiv preprint arXiv:2309.16609},
year={2023}
}
``` |
EleutherAI/pythia-1b-capitals-first | EleutherAI | "2024-03-01T20:26:14Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-03-01T20:26:12Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
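Pending the official instructions, a minimal sketch; loading this as a causal LM is an assumption based on the Pythia-1b base named in the repo id, since the card does not state the task:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Causal-LM loading is an assumption inferred from the repo name.
model_id = "EleutherAI/pythia-1b-capitals-first"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```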
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
appvoid/merging-5 | appvoid | "2024-05-06T04:20:24Z" | 141 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:appvoid/palmer-003",
"base_model:finetune:appvoid/palmer-003",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-06T04:19:42Z" | ---
base_model:
- appvoid/palmer-003
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [appvoid/palmer-003](https://huggingface.co/appvoid/palmer-003)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: appvoid/palmer-003
layer_range: [0, 12]
- sources:
- model: appvoid/palmer-003
layer_range: [10, 18]
- sources:
- model: appvoid/palmer-003
layer_range: [16, 21]
merge_method: passthrough
dtype: float16
```
|
daniel40/ab2a86c0-4891-4294-b385-884f16593924 | daniel40 | "2025-01-31T04:28:36Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Llama-2-13b-128k",
"base_model:adapter:NousResearch/Yarn-Llama-2-13b-128k",
"region:us"
] | null | "2025-01-31T04:16:18Z" | ---
library_name: peft
base_model: NousResearch/Yarn-Llama-2-13b-128k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ab2a86c0-4891-4294-b385-884f16593924
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Llama-2-13b-128k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 8448f56e3e3be066_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8448f56e3e3be066_train_data.json
type:
field_instruction: src
field_output: tgt
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: daniel40/ab2a86c0-4891-4294-b385-884f16593924
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/8448f56e3e3be066_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f0b1e7f4-c2f1-4505-8e1f-27347451ce0c
wandb_project: Birthday-SN56-27-Gradients-On-Demand
wandb_run: your_name
wandb_runid: f0b1e7f4-c2f1-4505-8e1f-27347451ce0c
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# ab2a86c0-4891-4294-b385-884f16593924
This model is a fine-tuned version of [NousResearch/Yarn-Llama-2-13b-128k](https://huggingface.co/NousResearch/Yarn-Llama-2-13b-128k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5709
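A minimal loading sketch, assuming the usual PEFT adapter workflow for the base model named in the axolotl config above:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# trust_remote_code mirrors the `trust_remote_code: true` setting
# in the axolotl config above.
base_id = "NousResearch/Yarn-Llama-2-13b-128k"
base = AutoModelForCausalLM.from_pretrained(base_id, trust_remote_code=True)
model = PeftModel.from_pretrained(
    base, "daniel40/ab2a86c0-4891-4294-b385-884f16593924"
)
tokenizer = AutoTokenizer.from_pretrained(base_id)
```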
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 1.1092 |
| 4.3905 | 0.0018 | 13 | 0.7668 |
| 2.8384 | 0.0035 | 26 | 0.6061 |
| 2.3033 | 0.0053 | 39 | 0.5709 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
sfeucht/mamba_lenses | sfeucht | "2024-02-14T01:48:17Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-02-08T19:40:00Z" | To load checkpoint, use the `TunedLens` class in the `train_tunedlens_mamba.py` file.
```
import accelerate
lens = TunedLens(model.backbone.layers, d_model_hidden_states).to("cuda")
ckpt = 'tunedlens_34_mamba-130m.safetensors'
lens = accelerate.load_checkpoint_and_dispatch(lens, ckpt)
``` |
jcashmoney123/test-model | jcashmoney123 | "2022-07-25T16:16:07Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain",
"unk",
"dataset:jcashmoney123/autotrain-data-test-summarization",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-07-25T16:12:07Z" | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- jcashmoney123/autotrain-data-test-summarization
co2_eq_emissions: 6.160395825083539
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1177143826
- CO2 Emissions (in grams): 6.160395825083539
## Validation Metrics
- Loss: 2.9017226696014404
- Rouge1: 21.6224
- Rouge2: 5.6481
- RougeL: 19.0725
- RougeLsum: 19.1428
- Gen Len: 12.0
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/jcashmoney123/autotrain-test-summarization-1177143826
``` |
abdouaziiz/llama-7B-wolof_peft | abdouaziiz | "2023-11-23T12:03:57Z" | 0 | 1 | peft | [
"peft",
"wo",
"license:mit",
"region:us"
] | null | "2023-11-23T10:53:30Z" | ---
library_name: peft
license: mit
language:
- wo
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
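For reference, a sketch of the same settings expressed as a `transformers` `BitsAndBytesConfig` (values copied from the list above):

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the bitsandbytes quantization config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```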
### Framework versions
- PEFT 0.5.0 |
Sohaib9920/distilbert-imdb | Sohaib9920 | "2024-12-21T14:36:54Z" | 105 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-12-21T14:26:54Z" | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1826
- Accuracy: 0.9307
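A minimal inference sketch using the `transformers` pipeline (the label names come from the checkpoint's config):

```python
from transformers import pipeline

# Sentiment classifier fine-tuned from distilbert-base-uncased.
classifier = pipeline("text-classification", model="Sohaib9920/distilbert-imdb")
print(classifier("This movie was a delight from start to finish."))
```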
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2765 | 1.0 | 782 | 0.1826 | 0.9307 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
|
JaehyeokLee/20m_em_checkpoint_epoch_1_step_2040 | JaehyeokLee | "2025-02-24T04:05:13Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"arxiv:2402.03216",
"arxiv:2004.04906",
"arxiv:2106.14807",
"arxiv:2107.05720",
"arxiv:2004.12832",
"license:mit",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2025-02-24T03:01:02Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
license: mit
---
For more details, please refer to our GitHub repo: https://github.com/FlagOpen/FlagEmbedding
# BGE-M3 ([paper](https://arxiv.org/pdf/2402.03216.pdf), [code](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3))
In this project, we introduce BGE-M3, which is distinguished for its versatility in Multi-Functionality, Multi-Linguality, and Multi-Granularity.
- Multi-Functionality: It can simultaneously perform the three common retrieval functionalities of embedding model: dense retrieval, multi-vector retrieval, and sparse retrieval.
- Multi-Linguality: It can support more than 100 working languages.
- Multi-Granularity: It is able to process inputs of different granularities, spanning from short sentences to long documents of up to 8192 tokens.
**Some suggestions for a retrieval pipeline in RAG:**
We recommend using the following pipeline: hybrid retrieval + re-ranking.
- Hybrid retrieval leverages the strengths of various methods, offering higher accuracy and stronger generalization capabilities.
A classic example: using both embedding retrieval and the BM25 algorithm.
Now, you can try BGE-M3, which supports both embedding and sparse retrieval.
This allows you to obtain token weights (similar to BM25) at no additional cost when generating dense embeddings.
- As cross-encoder models, re-rankers demonstrate higher accuracy than bi-encoder embedding models.
Utilizing the re-ranking model (e.g., [bge-reranker](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker), [cohere-reranker](https://txt.cohere.com/rerank/)) after retrieval can further filter the selected text.
## News:
- 2/6/2024: We release the [MLDR](https://huggingface.co/datasets/Shitao/MLDR) (a long document retrieval dataset covering 13 languages) and [evaluation pipeline](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB/MLDR).
- 2/1/2024: **Thanks for the excellent tool from Vespa.** You can easily use multiple modes of BGE-M3 following this [notebook](https://github.com/vespa-engine/pyvespa/blob/master/docs/sphinx/source/examples/mother-of-all-embedding-models-cloud.ipynb)
## Specs
- Model
| Model Name | Dimension | Sequence Length | Introduction |
|:----:|:---:|:---:|:---:|
| [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) | 1024 | 8192 | multilingual; unified fine-tuning (dense, sparse, and colbert) from bge-m3-unsupervised|
| [BAAI/bge-m3-unsupervised](https://huggingface.co/BAAI/bge-m3-unsupervised) | 1024 | 8192 | multilingual; contrastive learning from bge-m3-retromae |
| [BAAI/bge-m3-retromae](https://huggingface.co/BAAI/bge-m3-retromae) | -- | 8192 | multilingual; extend the max_length of [xlm-roberta](https://huggingface.co/FacebookAI/xlm-roberta-large) to 8192 and further pretrained via [retromae](https://github.com/staoxiao/RetroMAE)|
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 | English model |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 | English model |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 | English model |
- Data
| Dataset | Introduction |
|:----:|:---:|
| [MLDR](https://huggingface.co/datasets/Shitao/MLDR) | Document Retrieval Dataset, covering 13 languages|
## FAQ
**1. Introduction for different retrieval methods**
- Dense retrieval: map the text into a single embedding, e.g., [DPR](https://arxiv.org/abs/2004.04906), [BGE-v1.5](https://github.com/FlagOpen/FlagEmbedding)
- Sparse retrieval (lexical matching): a vector of size equal to the vocabulary, with the majority of positions set to zero, calculating a weight only for tokens present in the text. e.g., BM25, [unicoil](https://arxiv.org/pdf/2106.14807.pdf), and [splade](https://arxiv.org/abs/2107.05720)
- Multi-vector retrieval: use multiple vectors to represent a text, e.g., [ColBERT](https://arxiv.org/abs/2004.12832).
**2. Comparison with BGE-v1.5 and other monolingual models**
BGE-M3 is a multilingual model, and its ability in monolingual embedding retrieval may not surpass models specifically designed for single languages.
However, we still recommend trying BGE-M3 because of its versatility (support for multiple languages and long texts).
Moreover, it can simultaneously generate multiple representations, and using them together can enhance accuracy and generalization,
unlike most existing models that can only perform dense retrieval.
In the open-source community, there are many excellent models (e.g., jina-embedding, colbert, e5, etc),
and users can choose a model that suits their specific needs based on practical considerations,
such as whether to require multilingual or cross-language support, and whether to process long texts.
**3. How to use BGE-M3 in other projects?**
For embedding retrieval, you can employ the BGE-M3 model using the same approach as BGE.
The only difference is that the BGE-M3 model no longer requires adding instructions to the queries.
For sparse retrieval methods, most open-source libraries currently do not support direct utilization of the BGE-M3 model.
Contributions from the community are welcome.
In our experiments, we use [Pyserini](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB/MLDR#hybrid-retrieval-dense--sparse) and Faiss to do hybrid retrieval.
**Now you can try the hybrid mode of BGE-M3 in [Vespa](https://github.com/vespa-engine/pyvespa/blob/master/docs/sphinx/source/examples/mother-of-all-embedding-models-cloud.ipynb). Thanks @jobergum.**
**4. How to fine-tune bge-M3 model?**
You can follow the common practice in this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune)
to fine-tune the dense embedding.
Our code and data for unified fine-tuning (dense, sparse, and multi-vectors) will be released.
## Usage
Install:
```
git clone https://github.com/FlagOpen/FlagEmbedding.git
cd FlagEmbedding
pip install -e .
```
or:
```
pip install -U FlagEmbedding
```
### Generate Embedding for text
- Dense Embedding
```python
from FlagEmbedding import BGEM3FlagModel
model = BGEM3FlagModel('BAAI/bge-m3',
use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
sentences_1 = ["What is BGE M3?", "Defination of BM25"]
sentences_2 = ["BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.",
"BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document"]
embeddings_1 = model.encode(sentences_1,
batch_size=12,
max_length=8192, # If you don't need such a long length, you can set a smaller value to speed up the encoding process.
)['dense_vecs']
embeddings_2 = model.encode(sentences_2)['dense_vecs']
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
# [[0.6265, 0.3477], [0.3499, 0.678 ]]
```
You also can use sentence-transformers and huggingface transformers to generate dense embeddings.
Refer to [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/baai_general_embedding#usage) for details.
- Sparse Embedding (Lexical Weight)
```python
from FlagEmbedding import BGEM3FlagModel
model = BGEM3FlagModel('BAAI/bge-m3', use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
sentences_1 = ["What is BGE M3?", "Defination of BM25"]
sentences_2 = ["BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.",
"BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document"]
output_1 = model.encode(sentences_1, return_dense=True, return_sparse=True, return_colbert_vecs=False)
output_2 = model.encode(sentences_2, return_dense=True, return_sparse=True, return_colbert_vecs=False)
# you can see the weight for each token:
print(model.convert_id_to_token(output_1['lexical_weights']))
# [{'What': 0.08356, 'is': 0.0814, 'B': 0.1296, 'GE': 0.252, 'M': 0.1702, '3': 0.2695, '?': 0.04092},
# {'De': 0.05005, 'fin': 0.1368, 'ation': 0.04498, 'of': 0.0633, 'BM': 0.2515, '25': 0.3335}]
# compute the scores via lexical matching
lexical_scores = model.compute_lexical_matching_score(output_1['lexical_weights'][0], output_2['lexical_weights'][0])
print(lexical_scores)
# 0.19554901123046875
print(model.compute_lexical_matching_score(output_1['lexical_weights'][0], output_1['lexical_weights'][1]))
# 0.0
```
- Multi-Vector (ColBERT)
```python
from FlagEmbedding import BGEM3FlagModel
model = BGEM3FlagModel('BAAI/bge-m3', use_fp16=True)
sentences_1 = ["What is BGE M3?", "Defination of BM25"]
sentences_2 = ["BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.",
"BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document"]
output_1 = model.encode(sentences_1, return_dense=True, return_sparse=True, return_colbert_vecs=True)
output_2 = model.encode(sentences_2, return_dense=True, return_sparse=True, return_colbert_vecs=True)
print(model.colbert_score(output_1['colbert_vecs'][0], output_2['colbert_vecs'][0]))
print(model.colbert_score(output_1['colbert_vecs'][0], output_2['colbert_vecs'][1]))
# 0.7797
# 0.4620
```
### Compute score for text pairs
Given a list of text pairs, you can get the scores computed by different methods.
```python
from FlagEmbedding import BGEM3FlagModel
model = BGEM3FlagModel('BAAI/bge-m3', use_fp16=True)
sentences_1 = ["What is BGE M3?", "Defination of BM25"]
sentences_2 = ["BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.",
"BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document"]
sentence_pairs = [[i,j] for i in sentences_1 for j in sentences_2]
print(model.compute_score(sentence_pairs,
max_passage_length=128, # a smaller max length leads to a lower latency
weights_for_different_modes=[0.4, 0.2, 0.4])) # weights_for_different_modes(w) is used to do weighted sum: w[0]*dense_score + w[1]*sparse_score + w[2]*colbert_score
# {
# 'colbert': [0.7796499729156494, 0.4621465802192688, 0.4523794651031494, 0.7898575067520142],
# 'sparse': [0.195556640625, 0.00879669189453125, 0.0, 0.1802978515625],
# 'dense': [0.6259765625, 0.347412109375, 0.349853515625, 0.67822265625],
# 'sparse+dense': [0.482503205537796, 0.23454029858112335, 0.2332356721162796, 0.5122477412223816],
# 'colbert+sparse+dense': [0.6013619303703308, 0.3255828022956848, 0.32089319825172424, 0.6232916116714478]
# }
```
## Evaluation
- Multilingual (Miracl dataset)

- Cross-lingual (MKQA dataset)

- Long Document Retrieval
- MLDR:

Please note that [MLDR](https://huggingface.co/datasets/Shitao/MLDR) is a document retrieval dataset we constructed via LLM,
covering 13 languages, including test set, validation set, and training set.
We utilized the training set from MLDR to enhance the model's long document retrieval capabilities.
Therefore, comparing baselines with `Dense w.o.long` (fine-tuned without the long-document dataset) is more equitable.
Additionally, this long document retrieval dataset will be open-sourced to address the current lack of open-source multilingual long text retrieval datasets.
We believe that this data will be helpful for the open-source community in training document retrieval models.
- NarrativeQA:

## Training
- Self-knowledge Distillation: combining multiple outputs from different
retrieval modes as a reward signal to enhance the performance of a single mode (especially for sparse retrieval and multi-vector (ColBERT) retrieval)
- Efficient Batching: improves efficiency when fine-tuning on long text.
The small-batch strategy is simple but effective and can also be used to fine-tune large embedding models.
- MCLS: a simple method to improve performance on long text without fine-tuning.
If you do not have enough resources to fine-tune the model on long text, this method is useful.
Refer to our [report](https://arxiv.org/pdf/2402.03216.pdf) for more details.
**The fine-tuning codes and datasets will be open-sourced in the near future.**
## Acknowledgement
Thanks to the authors of open-sourced datasets, including Miracl, MKQA, NarrativeQA, etc.
Thanks to open-sourced libraries like [Tevatron](https://github.com/texttron/tevatron), [pyserial](https://github.com/pyserial/pyserial).
## Citation
If you find this repository useful, please consider giving a star :star: and citation
```
@misc{bge-m3,
title={BGE M3-Embedding: Multi-Lingual, Multi-Functionality, Multi-Granularity Text Embeddings Through Self-Knowledge Distillation},
author={Jianlv Chen and Shitao Xiao and Peitian Zhang and Kun Luo and Defu Lian and Zheng Liu},
year={2024},
eprint={2402.03216},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
furrutiav/neobert_mixtral_nllfg_rubric_cola_tf_idf_perplexity | furrutiav | "2025-03-19T14:55:55Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"neobert",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"region:us"
] | feature-extraction | "2025-03-19T14:54:49Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
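Pending the official instructions, a minimal feature-extraction sketch; `trust_remote_code=True` follows from the `custom_code` tag on this repo, and the pooling strategy is not documented:

```python
from transformers import AutoModel, AutoTokenizer

# custom_code tag on the repo implies trust_remote_code is required.
model_id = "furrutiav/neobert_mixtral_nllfg_rubric_cola_tf_idf_perplexity"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("An example sentence.", return_tensors="pt")
features = model(**inputs).last_hidden_state  # token-level features
```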
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gokulsrinivasagan/distilbert_lda_50_v1_sst2 | gokulsrinivasagan | "2024-12-04T11:57:20Z" | 120 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokulsrinivasagan/distilbert_lda_50_v1",
"base_model:finetune:gokulsrinivasagan/distilbert_lda_50_v1",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-11-26T17:58:03Z" | ---
library_name: transformers
language:
- en
base_model: gokulsrinivasagan/distilbert_lda_50_v1
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert_lda_50_v1_sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.8291284403669725
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_lda_50_v1_sst2
This model is a fine-tuned version of [gokulsrinivasagan/distilbert_lda_50_v1](https://huggingface.co/gokulsrinivasagan/distilbert_lda_50_v1) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4006
- Accuracy: 0.8291
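A minimal inference sketch using the `transformers` pipeline (label names come from the checkpoint's config):

```python
from transformers import pipeline

# SST-2 sentiment fine-tune of the distilbert_lda_50_v1 base.
clf = pipeline(
    "text-classification",
    model="gokulsrinivasagan/distilbert_lda_50_v1_sst2",
)
print(clf("A gorgeous, witty, seductive movie."))
```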
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3785 | 1.0 | 264 | 0.4006 | 0.8291 |
| 0.2139 | 2.0 | 528 | 0.4261 | 0.8406 |
| 0.1523 | 3.0 | 792 | 0.4886 | 0.8154 |
| 0.1085 | 4.0 | 1056 | 0.5392 | 0.8268 |
| 0.0809 | 5.0 | 1320 | 0.5836 | 0.8303 |
| 0.0646 | 6.0 | 1584 | 0.5851 | 0.8177 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3
|
rowenac/persianminiaturev1 | rowenac | "2024-08-19T18:59:28Z" | 32 | 0 | diffusers | [
"diffusers",
"safetensors",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-08-19T17:59:01Z" | ---
license: cc-by-nc-4.0
---
|
RichardErkhov/jan-hq_-_llama3.2-1B-instruct-fp32-1e4-cp-2000-gguf | RichardErkhov | "2025-02-23T00:07:27Z" | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-22T23:44:18Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama3.2-1B-instruct-fp32-1e4-cp-2000 - GGUF
- Model creator: https://huggingface.co/jan-hq/
- Original model: https://huggingface.co/jan-hq/llama3.2-1B-instruct-fp32-1e4-cp-2000/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama3.2-1B-instruct-fp32-1e4-cp-2000.Q2_K.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_llama3.2-1B-instruct-fp32-1e4-cp-2000-gguf/blob/main/llama3.2-1B-instruct-fp32-1e4-cp-2000.Q2_K.gguf) | Q2_K | 0.54GB |
| [llama3.2-1B-instruct-fp32-1e4-cp-2000.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_llama3.2-1B-instruct-fp32-1e4-cp-2000-gguf/blob/main/llama3.2-1B-instruct-fp32-1e4-cp-2000.IQ3_XS.gguf) | IQ3_XS | 0.58GB |
| [llama3.2-1B-instruct-fp32-1e4-cp-2000.IQ3_S.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_llama3.2-1B-instruct-fp32-1e4-cp-2000-gguf/blob/main/llama3.2-1B-instruct-fp32-1e4-cp-2000.IQ3_S.gguf) | IQ3_S | 0.6GB |
| [llama3.2-1B-instruct-fp32-1e4-cp-2000.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_llama3.2-1B-instruct-fp32-1e4-cp-2000-gguf/blob/main/llama3.2-1B-instruct-fp32-1e4-cp-2000.Q3_K_S.gguf) | Q3_K_S | 0.6GB |
| [llama3.2-1B-instruct-fp32-1e4-cp-2000.IQ3_M.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_llama3.2-1B-instruct-fp32-1e4-cp-2000-gguf/blob/main/llama3.2-1B-instruct-fp32-1e4-cp-2000.IQ3_M.gguf) | IQ3_M | 0.61GB |
| [llama3.2-1B-instruct-fp32-1e4-cp-2000.Q3_K.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_llama3.2-1B-instruct-fp32-1e4-cp-2000-gguf/blob/main/llama3.2-1B-instruct-fp32-1e4-cp-2000.Q3_K.gguf) | Q3_K | 0.64GB |
| [llama3.2-1B-instruct-fp32-1e4-cp-2000.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_llama3.2-1B-instruct-fp32-1e4-cp-2000-gguf/blob/main/llama3.2-1B-instruct-fp32-1e4-cp-2000.Q3_K_M.gguf) | Q3_K_M | 0.64GB |
| [llama3.2-1B-instruct-fp32-1e4-cp-2000.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_llama3.2-1B-instruct-fp32-1e4-cp-2000-gguf/blob/main/llama3.2-1B-instruct-fp32-1e4-cp-2000.Q3_K_L.gguf) | Q3_K_L | 0.68GB |
| [llama3.2-1B-instruct-fp32-1e4-cp-2000.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_llama3.2-1B-instruct-fp32-1e4-cp-2000-gguf/blob/main/llama3.2-1B-instruct-fp32-1e4-cp-2000.IQ4_XS.gguf) | IQ4_XS | 0.7GB |
| [llama3.2-1B-instruct-fp32-1e4-cp-2000.Q4_0.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_llama3.2-1B-instruct-fp32-1e4-cp-2000-gguf/blob/main/llama3.2-1B-instruct-fp32-1e4-cp-2000.Q4_0.gguf) | Q4_0 | 0.72GB |
| [llama3.2-1B-instruct-fp32-1e4-cp-2000.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_llama3.2-1B-instruct-fp32-1e4-cp-2000-gguf/blob/main/llama3.2-1B-instruct-fp32-1e4-cp-2000.IQ4_NL.gguf) | IQ4_NL | 0.72GB |
| [llama3.2-1B-instruct-fp32-1e4-cp-2000.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_llama3.2-1B-instruct-fp32-1e4-cp-2000-gguf/blob/main/llama3.2-1B-instruct-fp32-1e4-cp-2000.Q4_K_S.gguf) | Q4_K_S | 0.72GB |
| [llama3.2-1B-instruct-fp32-1e4-cp-2000.Q4_K.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_llama3.2-1B-instruct-fp32-1e4-cp-2000-gguf/blob/main/llama3.2-1B-instruct-fp32-1e4-cp-2000.Q4_K.gguf) | Q4_K | 0.75GB |
| [llama3.2-1B-instruct-fp32-1e4-cp-2000.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_llama3.2-1B-instruct-fp32-1e4-cp-2000-gguf/blob/main/llama3.2-1B-instruct-fp32-1e4-cp-2000.Q4_K_M.gguf) | Q4_K_M | 0.75GB |
| [llama3.2-1B-instruct-fp32-1e4-cp-2000.Q4_1.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_llama3.2-1B-instruct-fp32-1e4-cp-2000-gguf/blob/main/llama3.2-1B-instruct-fp32-1e4-cp-2000.Q4_1.gguf) | Q4_1 | 0.77GB |
| [llama3.2-1B-instruct-fp32-1e4-cp-2000.Q5_0.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_llama3.2-1B-instruct-fp32-1e4-cp-2000-gguf/blob/main/llama3.2-1B-instruct-fp32-1e4-cp-2000.Q5_0.gguf) | Q5_0 | 0.83GB |
| [llama3.2-1B-instruct-fp32-1e4-cp-2000.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_llama3.2-1B-instruct-fp32-1e4-cp-2000-gguf/blob/main/llama3.2-1B-instruct-fp32-1e4-cp-2000.Q5_K_S.gguf) | Q5_K_S | 0.83GB |
| [llama3.2-1B-instruct-fp32-1e4-cp-2000.Q5_K.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_llama3.2-1B-instruct-fp32-1e4-cp-2000-gguf/blob/main/llama3.2-1B-instruct-fp32-1e4-cp-2000.Q5_K.gguf) | Q5_K | 0.85GB |
| [llama3.2-1B-instruct-fp32-1e4-cp-2000.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_llama3.2-1B-instruct-fp32-1e4-cp-2000-gguf/blob/main/llama3.2-1B-instruct-fp32-1e4-cp-2000.Q5_K_M.gguf) | Q5_K_M | 0.85GB |
| [llama3.2-1B-instruct-fp32-1e4-cp-2000.Q5_1.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_llama3.2-1B-instruct-fp32-1e4-cp-2000-gguf/blob/main/llama3.2-1B-instruct-fp32-1e4-cp-2000.Q5_1.gguf) | Q5_1 | 0.89GB |
| [llama3.2-1B-instruct-fp32-1e4-cp-2000.Q6_K.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_llama3.2-1B-instruct-fp32-1e4-cp-2000-gguf/blob/main/llama3.2-1B-instruct-fp32-1e4-cp-2000.Q6_K.gguf) | Q6_K | 0.95GB |
| [llama3.2-1B-instruct-fp32-1e4-cp-2000.Q8_0.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_llama3.2-1B-instruct-fp32-1e4-cp-2000-gguf/blob/main/llama3.2-1B-instruct-fp32-1e4-cp-2000.Q8_0.gguf) | Q8_0 | 1.23GB |
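
To pull a single quant from this repo programmatically, `huggingface_hub` can be used; for example, fetching the Q4_K_M file from the table above:

```python
# Download one quant file from the table above.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="RichardErkhov/jan-hq_-_llama3.2-1B-instruct-fp32-1e4-cp-2000-gguf",
    filename="llama3.2-1B-instruct-fp32-1e4-cp-2000.Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded file
```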
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
withpi/pi_scorer_bert_v3_init_g_12000 | withpi | "2025-03-24T04:13:56Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"modernbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-03-24T04:11:19Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
alielfilali01/dallah-llama | alielfilali01 | "2024-12-19T11:51:48Z" | 32 | 0 | null | [
"safetensors",
"llama",
"visual-question-answering",
"ar",
"region:us"
] | visual-question-answering | "2024-12-19T11:50:30Z" | ---
language:
- ar
pipeline_tag: visual-question-answering
---
# Dallah: A Dialect-Aware Multimodal Large Language Model for Arabic
Dallah is an advanced multimodal large language model (MLLM) tailored for the Arabic language, with a specific focus on understanding and generating content across various Arabic dialects. Built upon the **LLaVA** framework and powered by the **LLaMA-2** architecture, Dallah integrates both textual and visual data to facilitate comprehensive multimodal interactions.
## Model Details
- **Architecture**: LLaVA-based multimodal model with LLaMA-2 backbone.
- **Languages Supported**: Modern Standard Arabic (MSA) and six major Arabic dialects.
- **Modalities**: Text and image.
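
The card does not include a loading snippet. As a rough illustration only — the classes, prompt format, and repo compatibility below are assumptions, and LLaVA-style checkpoints often require repo-specific code — inference might look like:

```python
# Hypothetical sketch: assumes the weights are exported in a
# transformers-compatible LLaVA format, which is NOT confirmed by the card.
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

repo = "alielfilali01/dallah-llama"
processor = AutoProcessor.from_pretrained(repo)
model = LlavaForConditionalGeneration.from_pretrained(repo)

image = Image.open("example.jpg")
prompt = "USER: <image>\nما الذي يظهر في هذه الصورة؟ ASSISTANT:"  # Arabic VQA prompt

inputs = processor(images=image, text=prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output[0], skip_special_tokens=True))
```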
## Training Data
Dallah was fine-tuned on a diverse dataset encompassing both textual and visual information:
- **Textual Data**: Includes MSA and six prominent Arabic dialects, ensuring the model's proficiency across different regional linguistic variations.
- **Visual Data**: Comprised of image-text pairs, enabling the model to process and generate content that integrates both modalities.
## Performance
Dallah demonstrates state-of-the-art performance among Arabic MLLMs:
- Excels in both MSA and dialectal Arabic benchmarks.
- Effectively handles complex multimodal interactions involving textual and visual elements.
## Applications
Dallah’s multimodal and dialect-aware capabilities make it suitable for a range of applications, including:
- **Multilingual Chatbots**: Enhancing user interactions by understanding and responding in specific Arabic dialects.
- **Content Creation**: Assisting in generating culturally and linguistically appropriate content for diverse Arabic-speaking audiences.
- **Educational Tools**: Supporting language learning by providing examples and explanations in various dialects.
- **Cultural Preservation**: Documenting and promoting the use of different Arabic dialects on digital platforms.
## Citation
If you use Dallah in your research or applications, please cite the following paper:
```bibtex
@inproceedings{alwajih2024dallah,
title={Dallah: A Dialect-Aware Multimodal Large Language Model for Arabic},
author={Alwajih, Fakhraddin and Bhatia, Gagan and Abdul-Mageed, Muhammad},
booktitle={Proceedings of The Second Arabic Natural Language Processing Conference},
pages={320--336},
year={2024},
address={Bangkok, Thailand},
publisher={Association for Computational Linguistics},
url={https://aclanthology.org/2024.arabicnlp-1.27}
} |
ajrayman/Intellect_continuous | ajrayman | "2024-10-09T05:44:37Z" | 182 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-08-03T18:25:01Z" | ---
library_name: transformers
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: Intellect_continuous
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Intellect_continuous
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0483
- Rmse: 0.2198
- Mae: 0.1755
- Corr: 0.3336
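
The RMSE/MAE/correlation metrics above suggest a single-output regression head; under that assumption (not stated explicitly in the card), a loading sketch:

```python
# Hypothetical sketch: assumes a single-logit regression head,
# which the RMSE/MAE metrics above suggest but the card does not confirm.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "ajrayman/Intellect_continuous"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("I love reading about new scientific ideas.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # continuous trait score
print(score)
```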
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mae | Corr |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 1.0 | 268 | 0.0635 | 0.2519 | 0.2036 | 0.2911 |
| 0.0574 | 2.0 | 536 | 0.0492 | 0.2217 | 0.1772 | 0.3351 |
| 0.0574 | 3.0 | 804 | 0.0483 | 0.2198 | 0.1755 | 0.3336 |
### Framework versions
- Transformers 4.44.1
- Pytorch 1.11.0
- Datasets 2.12.0
- Tokenizers 0.19.1
|
fanzru/t5-small-finetuned-xlsum-with-multi-news-test-5-epoch | fanzru | "2022-12-26T02:36:08Z" | 12 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-12-25T17:55:08Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xlsum-with-multi-news-test-5-epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xlsum-with-multi-news-test-5-epoch
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2989
- Rouge1: 30.8254
- Rouge2: 9.2466
- Rougel: 24.0068
- Rougelsum: 24.0535
- Gen Len: 18.8143
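
A minimal way to try the checkpoint is the `transformers` summarization pipeline (the generation settings below are illustrative, not from the card):

```python
# Minimal usage sketch for this summarization fine-tune.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="fanzru/t5-small-finetuned-xlsum-with-multi-news-test-5-epoch",
)
article = "..."  # replace with a long news article
print(summarizer(article, max_length=64, min_length=10))
```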
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.7346 | 1.0 | 20543 | 2.3901 | 29.3586 | 8.2361 | 22.7798 | 22.8273 | 18.8201 |
| 2.6739 | 2.0 | 41086 | 2.3414 | 30.2258 | 8.77 | 23.496 | 23.5405 | 18.8384 |
| 2.6486 | 3.0 | 61629 | 2.3160 | 30.6221 | 9.1072 | 23.8114 | 23.8584 | 18.8194 |
| 2.648 | 4.0 | 82172 | 2.3033 | 30.8171 | 9.2146 | 23.9993 | 24.0424 | 18.8016 |
| 2.63 | 5.0 | 102715 | 2.2989 | 30.8254 | 9.2466 | 24.0068 | 24.0535 | 18.8143 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.13.1+cpu
- Datasets 2.8.0
- Tokenizers 0.10.3
|
roleplaiapp/ReaderLM-v2-Q3_K_M-GGUF | roleplaiapp | "2025-01-19T07:30:45Z" | 21 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"ReaderLM-v2",
"Q3_K_M",
"1.5b",
"3-bit",
"ReaderLM",
"jinaai-v2",
"code",
"math",
"chat",
"roleplay",
"text-generation",
"safetensors",
"nlp",
"multilingual",
"base_model:jinaai/ReaderLM-v2",
"base_model:quantized:jinaai/ReaderLM-v2",
"region:us",
"conversational"
] | text-generation | "2025-01-19T07:30:36Z" | ---
pipeline_tag: text-generation
language:
- multilingual
inference: false
library_name: transformers
base_model: jinaai/ReaderLM-v2
tags:
- llama-cpp
- ReaderLM-v2
- gguf
- Q3_K_M
- 1.5b
- 3-bit
- ReaderLM
- llama-cpp
- jinaai-v2
- code
- math
- chat
- roleplay
- text-generation
- safetensors
- nlp
- code
---
# roleplaiapp/ReaderLM-v2-Q3_K_M-GGUF
**Repo:** `roleplaiapp/ReaderLM-v2-Q3_K_M-GGUF`
**Original Model:** `ReaderLM-v2`
**Organization:** `jinaai-v2`
**Quantized File:** `readerlm-v2-q3_k_m.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q3_K_M`
**Use Imatrix:** `False`
**Split Model:** `False`
## Overview
This is a GGUF Q3_K_M quantized version of [ReaderLM-v2](https://huggingface.co/jinaai/ReaderLM-v2).
## Quantization By
I often have idle A100 GPUs while building/testing and training the RP app, so I put them to use quantizing models.
I hope the community finds these quantizations useful.
Andrew Webby @ [RolePlai](https://roleplai.app/)
|
tomekkorbak/suspicious_mestorf | tomekkorbak | "2022-12-13T19:35:14Z" | 0 | 0 | null | [
"generated_from_trainer",
"en",
"dataset:tomekkorbak/pii-pile-chunk3-0-50000",
"dataset:tomekkorbak/pii-pile-chunk3-50000-100000",
"dataset:tomekkorbak/pii-pile-chunk3-100000-150000",
"dataset:tomekkorbak/pii-pile-chunk3-150000-200000",
"dataset:tomekkorbak/pii-pile-chunk3-200000-250000",
"dataset:tomekkorbak/pii-pile-chunk3-250000-300000",
"dataset:tomekkorbak/pii-pile-chunk3-300000-350000",
"dataset:tomekkorbak/pii-pile-chunk3-350000-400000",
"dataset:tomekkorbak/pii-pile-chunk3-400000-450000",
"dataset:tomekkorbak/pii-pile-chunk3-450000-500000",
"dataset:tomekkorbak/pii-pile-chunk3-500000-550000",
"dataset:tomekkorbak/pii-pile-chunk3-550000-600000",
"dataset:tomekkorbak/pii-pile-chunk3-600000-650000",
"dataset:tomekkorbak/pii-pile-chunk3-650000-700000",
"dataset:tomekkorbak/pii-pile-chunk3-700000-750000",
"dataset:tomekkorbak/pii-pile-chunk3-750000-800000",
"dataset:tomekkorbak/pii-pile-chunk3-800000-850000",
"dataset:tomekkorbak/pii-pile-chunk3-850000-900000",
"dataset:tomekkorbak/pii-pile-chunk3-900000-950000",
"dataset:tomekkorbak/pii-pile-chunk3-950000-1000000",
"dataset:tomekkorbak/pii-pile-chunk3-1000000-1050000",
"dataset:tomekkorbak/pii-pile-chunk3-1050000-1100000",
"dataset:tomekkorbak/pii-pile-chunk3-1100000-1150000",
"dataset:tomekkorbak/pii-pile-chunk3-1150000-1200000",
"dataset:tomekkorbak/pii-pile-chunk3-1200000-1250000",
"dataset:tomekkorbak/pii-pile-chunk3-1250000-1300000",
"dataset:tomekkorbak/pii-pile-chunk3-1300000-1350000",
"dataset:tomekkorbak/pii-pile-chunk3-1350000-1400000",
"dataset:tomekkorbak/pii-pile-chunk3-1400000-1450000",
"dataset:tomekkorbak/pii-pile-chunk3-1450000-1500000",
"dataset:tomekkorbak/pii-pile-chunk3-1500000-1550000",
"dataset:tomekkorbak/pii-pile-chunk3-1550000-1600000",
"dataset:tomekkorbak/pii-pile-chunk3-1600000-1650000",
"dataset:tomekkorbak/pii-pile-chunk3-1650000-1700000",
"dataset:tomekkorbak/pii-pile-chunk3-1700000-1750000",
"dataset:tomekkorbak/pii-pile-chunk3-1750000-1800000",
"dataset:tomekkorbak/pii-pile-chunk3-1800000-1850000",
"dataset:tomekkorbak/pii-pile-chunk3-1850000-1900000",
"dataset:tomekkorbak/pii-pile-chunk3-1900000-1950000",
"license:mit",
"region:us"
] | null | "2022-12-13T19:35:07Z" | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- tomekkorbak/pii-pile-chunk3-0-50000
- tomekkorbak/pii-pile-chunk3-50000-100000
- tomekkorbak/pii-pile-chunk3-100000-150000
- tomekkorbak/pii-pile-chunk3-150000-200000
- tomekkorbak/pii-pile-chunk3-200000-250000
- tomekkorbak/pii-pile-chunk3-250000-300000
- tomekkorbak/pii-pile-chunk3-300000-350000
- tomekkorbak/pii-pile-chunk3-350000-400000
- tomekkorbak/pii-pile-chunk3-400000-450000
- tomekkorbak/pii-pile-chunk3-450000-500000
- tomekkorbak/pii-pile-chunk3-500000-550000
- tomekkorbak/pii-pile-chunk3-550000-600000
- tomekkorbak/pii-pile-chunk3-600000-650000
- tomekkorbak/pii-pile-chunk3-650000-700000
- tomekkorbak/pii-pile-chunk3-700000-750000
- tomekkorbak/pii-pile-chunk3-750000-800000
- tomekkorbak/pii-pile-chunk3-800000-850000
- tomekkorbak/pii-pile-chunk3-850000-900000
- tomekkorbak/pii-pile-chunk3-900000-950000
- tomekkorbak/pii-pile-chunk3-950000-1000000
- tomekkorbak/pii-pile-chunk3-1000000-1050000
- tomekkorbak/pii-pile-chunk3-1050000-1100000
- tomekkorbak/pii-pile-chunk3-1100000-1150000
- tomekkorbak/pii-pile-chunk3-1150000-1200000
- tomekkorbak/pii-pile-chunk3-1200000-1250000
- tomekkorbak/pii-pile-chunk3-1250000-1300000
- tomekkorbak/pii-pile-chunk3-1300000-1350000
- tomekkorbak/pii-pile-chunk3-1350000-1400000
- tomekkorbak/pii-pile-chunk3-1400000-1450000
- tomekkorbak/pii-pile-chunk3-1450000-1500000
- tomekkorbak/pii-pile-chunk3-1500000-1550000
- tomekkorbak/pii-pile-chunk3-1550000-1600000
- tomekkorbak/pii-pile-chunk3-1600000-1650000
- tomekkorbak/pii-pile-chunk3-1650000-1700000
- tomekkorbak/pii-pile-chunk3-1700000-1750000
- tomekkorbak/pii-pile-chunk3-1750000-1800000
- tomekkorbak/pii-pile-chunk3-1800000-1850000
- tomekkorbak/pii-pile-chunk3-1850000-1900000
- tomekkorbak/pii-pile-chunk3-1900000-1950000
model-index:
- name: suspicious_mestorf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# suspicious_mestorf
This model was trained from scratch on the tomekkorbak/pii-pile-chunk3-0-50000, the tomekkorbak/pii-pile-chunk3-50000-100000, the tomekkorbak/pii-pile-chunk3-100000-150000, the tomekkorbak/pii-pile-chunk3-150000-200000, the tomekkorbak/pii-pile-chunk3-200000-250000, the tomekkorbak/pii-pile-chunk3-250000-300000, the tomekkorbak/pii-pile-chunk3-300000-350000, the tomekkorbak/pii-pile-chunk3-350000-400000, the tomekkorbak/pii-pile-chunk3-400000-450000, the tomekkorbak/pii-pile-chunk3-450000-500000, the tomekkorbak/pii-pile-chunk3-500000-550000, the tomekkorbak/pii-pile-chunk3-550000-600000, the tomekkorbak/pii-pile-chunk3-600000-650000, the tomekkorbak/pii-pile-chunk3-650000-700000, the tomekkorbak/pii-pile-chunk3-700000-750000, the tomekkorbak/pii-pile-chunk3-750000-800000, the tomekkorbak/pii-pile-chunk3-800000-850000, the tomekkorbak/pii-pile-chunk3-850000-900000, the tomekkorbak/pii-pile-chunk3-900000-950000, the tomekkorbak/pii-pile-chunk3-950000-1000000, the tomekkorbak/pii-pile-chunk3-1000000-1050000, the tomekkorbak/pii-pile-chunk3-1050000-1100000, the tomekkorbak/pii-pile-chunk3-1100000-1150000, the tomekkorbak/pii-pile-chunk3-1150000-1200000, the tomekkorbak/pii-pile-chunk3-1200000-1250000, the tomekkorbak/pii-pile-chunk3-1250000-1300000, the tomekkorbak/pii-pile-chunk3-1300000-1350000, the tomekkorbak/pii-pile-chunk3-1350000-1400000, the tomekkorbak/pii-pile-chunk3-1400000-1450000, the tomekkorbak/pii-pile-chunk3-1450000-1500000, the tomekkorbak/pii-pile-chunk3-1500000-1550000, the tomekkorbak/pii-pile-chunk3-1550000-1600000, the tomekkorbak/pii-pile-chunk3-1600000-1650000, the tomekkorbak/pii-pile-chunk3-1650000-1700000, the tomekkorbak/pii-pile-chunk3-1700000-1750000, the tomekkorbak/pii-pile-chunk3-1750000-1800000, the tomekkorbak/pii-pile-chunk3-1800000-1850000, the tomekkorbak/pii-pile-chunk3-1850000-1900000 and the tomekkorbak/pii-pile-chunk3-1900000-1950000 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 3147
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
{'dataset': {'datasets': ['tomekkorbak/pii-pile-chunk3-0-50000',
'tomekkorbak/pii-pile-chunk3-50000-100000',
'tomekkorbak/pii-pile-chunk3-100000-150000',
'tomekkorbak/pii-pile-chunk3-150000-200000',
'tomekkorbak/pii-pile-chunk3-200000-250000',
'tomekkorbak/pii-pile-chunk3-250000-300000',
'tomekkorbak/pii-pile-chunk3-300000-350000',
'tomekkorbak/pii-pile-chunk3-350000-400000',
'tomekkorbak/pii-pile-chunk3-400000-450000',
'tomekkorbak/pii-pile-chunk3-450000-500000',
'tomekkorbak/pii-pile-chunk3-500000-550000',
'tomekkorbak/pii-pile-chunk3-550000-600000',
'tomekkorbak/pii-pile-chunk3-600000-650000',
'tomekkorbak/pii-pile-chunk3-650000-700000',
'tomekkorbak/pii-pile-chunk3-700000-750000',
'tomekkorbak/pii-pile-chunk3-750000-800000',
'tomekkorbak/pii-pile-chunk3-800000-850000',
'tomekkorbak/pii-pile-chunk3-850000-900000',
'tomekkorbak/pii-pile-chunk3-900000-950000',
'tomekkorbak/pii-pile-chunk3-950000-1000000',
'tomekkorbak/pii-pile-chunk3-1000000-1050000',
'tomekkorbak/pii-pile-chunk3-1050000-1100000',
'tomekkorbak/pii-pile-chunk3-1100000-1150000',
'tomekkorbak/pii-pile-chunk3-1150000-1200000',
'tomekkorbak/pii-pile-chunk3-1200000-1250000',
'tomekkorbak/pii-pile-chunk3-1250000-1300000',
'tomekkorbak/pii-pile-chunk3-1300000-1350000',
'tomekkorbak/pii-pile-chunk3-1350000-1400000',
'tomekkorbak/pii-pile-chunk3-1400000-1450000',
'tomekkorbak/pii-pile-chunk3-1450000-1500000',
'tomekkorbak/pii-pile-chunk3-1500000-1550000',
'tomekkorbak/pii-pile-chunk3-1550000-1600000',
'tomekkorbak/pii-pile-chunk3-1600000-1650000',
'tomekkorbak/pii-pile-chunk3-1650000-1700000',
'tomekkorbak/pii-pile-chunk3-1700000-1750000',
'tomekkorbak/pii-pile-chunk3-1750000-1800000',
'tomekkorbak/pii-pile-chunk3-1800000-1850000',
'tomekkorbak/pii-pile-chunk3-1850000-1900000',
'tomekkorbak/pii-pile-chunk3-1900000-1950000'],
'is_split_by_sentences': True,
'skip_tokens': 1649999872},
'generation': {'every_n_steps': 32,
'force_call_on': [25177],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048}],
'scorer_config': {}},
'kl_gpt3_callback': {'every_n_steps': 32,
'force_call_on': [25177],
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': False,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'model_kwargs': {'value_head_config': {'is_detached': False}},
'path_or_name': 'tomekkorbak/goofy_pasteur'},
'objective': {'alpha': 1, 'beta': 10, 'name': 'AWR'},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 512,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'suspicious_mestorf',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0001,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output2',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 3346,
'save_strategy': 'steps',
'seed': 42,
'tokens_already_seen': 1649999872,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/1ew71lih |
dimasik87/d329a0cb-9117-4da7-bf2f-e3dddb9f1385 | dimasik87 | "2025-01-11T21:32:45Z" | 11 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:llamafactory/tiny-random-Llama-3",
"base_model:adapter:llamafactory/tiny-random-Llama-3",
"license:apache-2.0",
"region:us"
] | null | "2025-01-11T21:29:00Z" | ---
library_name: peft
license: apache-2.0
base_model: llamafactory/tiny-random-Llama-3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d329a0cb-9117-4da7-bf2f-e3dddb9f1385
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: llamafactory/tiny-random-Llama-3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f184c23a14fb7845_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f184c23a14fb7845_train_data.json
type:
field_instruction: article
field_output: headline
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: dimasik87/d329a0cb-9117-4da7-bf2f-e3dddb9f1385
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/f184c23a14fb7845_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8cd04cd7-34e9-44ff-91c6-c25d93833538
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8cd04cd7-34e9-44ff-91c6-c25d93833538
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
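
For reference, a config like the one in the details block above is normally launched with axolotl's CLI (a sketch, assuming the YAML is saved locally as `config.yaml`):

```bash
accelerate launch -m axolotl.cli.train config.yaml
```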
# d329a0cb-9117-4da7-bf2f-e3dddb9f1385
This model is a fine-tuned version of [llamafactory/tiny-random-Llama-3](https://huggingface.co/llamafactory/tiny-random-Llama-3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.7652
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_torch (OptimizerNames.ADAMW_TORCH) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 11.7667 |
| 11.7658 | 0.0004 | 8 | 11.7665 |
| 11.7659 | 0.0009 | 16 | 11.7657 |
| 11.7651 | 0.0013 | 24 | 11.7652 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Margaritavillage/MargaritaVillage | Margaritavillage | "2025-04-12T05:34:36Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-04-12T05:34:36Z" | ---
license: apache-2.0
---
|
John6666/josei-realistic-v10a-sdxl | John6666 | "2024-09-03T22:53:28Z" | 165 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"game",
"en",
"base_model:BoRnNo0b/files-mirror",
"base_model:finetune:BoRnNo0b/files-mirror",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-09-03T22:39:32Z" | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- game
base_model: BoRnNo0b/files-mirror
---
Original model is [here](https://huggingface.co/BoRnNo0b/files-mirror) and on [Civitai](https://civitai.com/models/717882/josei-realistic?modelVersionId=802764).
This model was created by [BoRnNo0b](https://civitai.com/user/BoRnNo0b).
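
A minimal text-to-image sketch with `diffusers` (the prompt, dtype, and device below are illustrative assumptions):

```python
# Minimal sketch for loading this SDXL checkpoint with diffusers.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/josei-realistic-v10a-sdxl", torch_dtype=torch.float16
).to("cuda")
image = pipe("photorealistic portrait, cafe interior, soft light").images[0]
image.save("sample.png")
```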
|
rokn/slovlo-v1 | rokn | "2024-06-25T09:25:50Z" | 7 | 2 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"sl",
"dataset:rokn/slovlo-dataset-v1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-06-01T11:15:22Z" | ---
license: mit
datasets:
- rokn/slovlo-dataset-v1
language:
- sl
library_name: sentence-transformers
pipeline_tag: sentence-similarity
widget:
- source_sentence: Kam na pohod iz glavnega mesta Slovenije?
sentences:
- >-
Triglav je najvišja gora v Sloveniji (2864 m) in simbol slovenske narodne
identitete. Pohod je zahteven in običajno traja dva dni. Potrebna je dobra
fizična pripravljenost in osnovno znanje plezanja. Priporočena je tudi
uporaba vodnika za manj izkušene pohodnike.
- >-
Velika Planina je zelo priljubljena pohodniška destinacija z značilnimi
pastirskimi kočami. Pohod je primeren za vse starosti in ponuja čudovite
razglede na okoliške gore. Na vrh se lahko povzpnete peš ali z nihalko iz
Kamniške Bistrice.
- >-
Bled je znan po kremnih rezinah. Če vas zanima pohod, so pa zraven še
Ojstrica, ter Mala in Velika Osojnica.
- >-
Golica je znana po neskončnih poljih narcis v maju. Pohod se začne iz vasi
Planina pod Golico in traja približno 2-3 ure. Pot je primerna za vse
pohodnike in ponuja lepe razglede na Julijske Alpe in Avstrijo.
- >-
Šmarna Gora je najbolj priljubljena pohodniška destinacija v bližini
Ljubljane. Pohod traja približno 1 uro iz Tacna. Na vrhu je koča, kjer lahko
uživate v tradicionalni slovenski hrani in lepih razgledih na Ljubljansko
kotlino.
- >-
Pohorje je pohodniško območje z različnimi potmi, primernimi za vse starosti
in pripravljenosti. Posebej priljubljena je pot do Črnega jezera in
Slivniškega jezera. Pozimi je Pohorje tudi priljubljena smučarska
destinacija.
---
The SloVlo (**Slo**venske **Vlo**žitve) project brings purpose-built embeddings and semantic search capabilities to the Slovenian language.
The `slovlo-v1` model can be used to implement semantic search applications over Slovenian documents. See an example in the [usage](#usage) section below.
The base model is [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base); it has 12 layers and an embedding size of 768.
The model was trained and evaluated on the [slovlo-dataset-v1](https://huggingface.co/datasets/rokn/slovlo-dataset-v1) dataset.
The code is released in the [slovlo](https://github.com/novoselrok/slovlo) repository on GitHub.
## Usage
### Via Sentence Transformers
```py
from sentence_transformers import SentenceTransformer
import torch
model = SentenceTransformer("rokn/slovlo-v1")
query = "Kam na pohod iz glavnega mesta Slovenije?"
# First, we define the documents we want to search over.
# In our case, that is a list of destination descriptions.
documents = [
"Triglav je najvišja gora v Sloveniji (2864 m) in simbol slovenske narodne identitete. Pohod je zahteven in običajno traja dva dni. Potrebna je dobra fizična pripravljenost in osnovno znanje plezanja. Priporočena je tudi uporaba vodnika za manj izkušene pohodnike.",
"Velika Planina je zelo priljubljena pohodniška destinacija z značilnimi pastirskimi kočami. Pohod je primeren za vse starosti in ponuja čudovite razglede na okoliške gore. Na vrh se lahko povzpnete peš ali z nihalko iz Kamniške Bistrice.",
"Bled je znan po kremnih rezinah. Če vas zanima pohod, so pa zraven še Ojstrica, ter Mala in Velika Osojnica.",
"Golica je znana po neskončnih poljih narcis v maju. Pohod se začne iz vasi Planina pod Golico in traja približno 2-3 ure. Pot je primerna za vse pohodnike in ponuja lepe razglede na Julijske Alpe in Avstrijo.",
"Šmarna Gora je najbolj priljubljena pohodniška destinacija v bližini Ljubljane. Pohod traja približno 1 uro iz Tacna. Na vrhu je koča, kjer lahko uživate v tradicionalni slovenski hrani in lepih razgledih na Ljubljansko kotlino.",
"Pohorje je pohodniško območje z različnimi potmi, primernimi za vse starosti in pripravljenosti. Posebej priljubljena je pot do Črnega jezera in Slivniškega jezera. Pozimi je Pohorje tudi priljubljena smučarska destinacija.",
]
# Embed the documents (destinations).
document_embeddings = model.encode(documents, prompt_name="document")
# Embed the user query.
query_embedding = model.encode(query, prompt_name="query")
# Compute dot product between the query and each document.
similarities = model.similarity(query_embedding, document_embeddings)
# Find the nearest neighbor.
nearest_index = torch.argmax(similarities).item()
print("Predlog za tvojo naslednjo avanturo:", documents[nearest_index])
```
### Via Transformers
```py
from typing import List
import torch
from transformers import AutoModel, AutoTokenizer
device = "cuda" if torch.cuda.is_available() else "cpu"
query = "Kam na pohod iz glavnega mesta Slovenije?"
# First, we define the documents we want to search over.
# In our case, that is a list of destination descriptions.
documents = [
"Triglav je najvišja gora v Sloveniji (2864 m) in simbol slovenske narodne identitete. Pohod je zahteven in običajno traja dva dni. Potrebna je dobra fizična pripravljenost in osnovno znanje plezanja. Priporočena je tudi uporaba vodnika za manj izkušene pohodnike.",
"Velika Planina je zelo priljubljena pohodniška destinacija z značilnimi pastirskimi kočami. Pohod je primeren za vse starosti in ponuja čudovite razglede na okoliške gore. Na vrh se lahko povzpnete peš ali z nihalko iz Kamniške Bistrice.",
"Bled je znan po kremnih rezinah. Če vas zanima pohod, so pa zraven še Ojstrica, ter Mala in Velika Osojnica.",
"Golica je znana po neskončnih poljih narcis v maju. Pohod se začne iz vasi Planina pod Golico in traja približno 2-3 ure. Pot je primerna za vse pohodnike in ponuja lepe razglede na Julijske Alpe in Avstrijo.",
"Šmarna Gora je najbolj priljubljena pohodniška destinacija v bližini Ljubljane. Pohod traja približno 1 uro iz Tacna. Na vrhu je koča, kjer lahko uživate v tradicionalni slovenski hrani in lepih razgledih na Ljubljansko kotlino.",
"Pohorje je pohodniško območje z različnimi potmi, primernimi za vse starosti in pripravljenosti. Posebej priljubljena je pot do Črnega jezera in Slivniškega jezera. Pozimi je Pohorje tudi priljubljena smučarska destinacija.",
]
# Load the model and the tokenizer.
slovlo_model = AutoModel.from_pretrained("rokn/slovlo-v1").eval().to(device)
slovlo_tokenizer = AutoTokenizer.from_pretrained("rokn/slovlo-v1")
def get_embeddings(texts: List[str], prefix: str):
    def mean_pool(
        last_hidden_states: torch.Tensor, attention_mask: torch.Tensor
    ) -> torch.Tensor:
        # Zero out hidden states at padding positions, then average the
        # remaining token embeddings over the true sequence length.
        last_hidden = last_hidden_states.masked_fill(
            ~attention_mask[..., None].bool(), 0.0
        )
        return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
prefixed_texts = [f"{prefix}{text}" for text in texts]
inputs = slovlo_tokenizer(
prefixed_texts, return_tensors="pt", truncation=True, padding=True
).to(device)
with torch.no_grad():
model_output = slovlo_model(**inputs)
embeddings = mean_pool(model_output.last_hidden_state, inputs["attention_mask"])
return torch.nn.functional.normalize(embeddings, p=2, dim=1)
# Embed the documents (destinations).
document_embeddings = get_embeddings(documents, "document: ")
# Embed the user query.
query_embedding = get_embeddings([query], "query: ")
# Compute dot product between the query and each document.
similarities = torch.matmul(document_embeddings, query_embedding.T).squeeze()
# Find the nearest neighbor.
nearest_index = torch.argmax(similarities).item()
print("Predlog za tvojo naslednjo avanturo:", documents[nearest_index])
```
## Evaluation
MRR@k on the `slovlo-dataset-v1` test split:
| Model | MRR@1 | MRR@5 | MRR@10 |
| -------------------- | -------- | -------- | -------- |
| Elasticsearch (BM25) | 31.7 | 45.2 | 45.8 |
| e5-base-v2 | 25.1 | 36.5 | 37.2 |
| multilingual-e5-base | 37.2 | 53.9 | 54.5 |
| bge-m3 | 38.1 | 54.1 | 54.7 |
| slovlo-v1            | **43.6** | **60.4** | **61.0** |
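
For reference, MRR@k as reported above is the mean over queries of the reciprocal rank of the first relevant document, counting only ranks within the top k; a small sketch (assuming one relevant document per query):

```python
# Compute MRR@k from the 1-based rank of the first relevant document
# per query (None if it did not appear in the retrieved list).
def mrr_at_k(first_relevant_ranks, k):
    total = 0.0
    for rank in first_relevant_ranks:
        if rank is not None and rank <= k:
            total += 1.0 / rank
    return total / len(first_relevant_ranks)

print(mrr_at_k([1, 3, None, 2], k=5))  # (1 + 1/3 + 0 + 1/2) / 4 ≈ 0.458
```
|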
Tarek07/Protobase-SCE2-LLaMa-70B | Tarek07 | "2025-02-23T11:31:45Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2408.07990",
"base_model:NaniDAO/Llama-3.3-70B-Instruct-ablated",
"base_model:merge:NaniDAO/Llama-3.3-70B-Instruct-ablated",
"base_model:Sao10K/L3.3-70B-Euryale-v2.3",
"base_model:merge:Sao10K/L3.3-70B-Euryale-v2.3",
"base_model:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:merge:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:huihui-ai/Llama-3.3-70B-Instruct-abliterated",
"base_model:merge:huihui-ai/Llama-3.3-70B-Instruct-abliterated",
"base_model:mlabonne/Hermes-3-Llama-3.1-70B-lorablated",
"base_model:merge:mlabonne/Hermes-3-Llama-3.1-70B-lorablated",
"base_model:nbeerbower/Llama-3.1-Nemotron-lorablated-70B",
"base_model:merge:nbeerbower/Llama-3.1-Nemotron-lorablated-70B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-23T09:57:01Z" | ---
base_model:
- Sao10K/L3.3-70B-Euryale-v2.3
- nbeerbower/Llama-3.1-Nemotron-lorablated-70B
- NaniDAO/Llama-3.3-70B-Instruct-ablated
- mlabonne/Hermes-3-Llama-3.1-70B-lorablated
- huihui-ai/Llama-3.3-70B-Instruct-abliterated
- SicariusSicariiStuff/Negative_LLAMA_70B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using [huihui-ai/Llama-3.3-70B-Instruct-abliterated](https://huggingface.co/huihui-ai/Llama-3.3-70B-Instruct-abliterated) as a base.
### Models Merged
The following models were included in the merge:
* [Sao10K/L3.3-70B-Euryale-v2.3](https://huggingface.co/Sao10K/L3.3-70B-Euryale-v2.3)
* [nbeerbower/Llama-3.1-Nemotron-lorablated-70B](https://huggingface.co/nbeerbower/Llama-3.1-Nemotron-lorablated-70B)
* [NaniDAO/Llama-3.3-70B-Instruct-ablated](https://huggingface.co/NaniDAO/Llama-3.3-70B-Instruct-ablated)
* [mlabonne/Hermes-3-Llama-3.1-70B-lorablated](https://huggingface.co/mlabonne/Hermes-3-Llama-3.1-70B-lorablated)
* [SicariusSicariiStuff/Negative_LLAMA_70B](https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mlabonne/Hermes-3-Llama-3.1-70B-lorablated
- model: Sao10K/L3.3-70B-Euryale-v2.3
- model: SicariusSicariiStuff/Negative_LLAMA_70B
- model: nbeerbower/Llama-3.1-Nemotron-lorablated-70B
- model: NaniDAO/Llama-3.3-70B-Instruct-ablated
merge_method: sce
base_model: huihui-ai/Llama-3.3-70B-Instruct-abliterated
parameters:
select_topk: 0.30
dtype: bfloat16
tokenizer:
source: SicariusSicariiStuff/Negative_LLAMA_70B
```
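
For context, a config like the above is materialized with mergekit's CLI; a sketch, assuming mergekit is installed and the YAML is saved as `config.yaml`:

```bash
mergekit-yaml config.yaml ./merged-model --cuda
```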
|
shaheen1998/segformer-b0-finetuned-segments-sidewalk-2 | shaheen1998 | "2022-09-09T04:50:13Z" | 161 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | "2022-09-06T04:11:55Z" | ---
license: other
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b0-finetuned-segments-sidewalk-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-segments-sidewalk-2
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset.
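
A rough inference sketch (not from the original authors; the processor class and label handling are assumptions based on standard Segformer usage in `transformers`):

```python
# Hypothetical inference sketch; assumes the checkpoint loads with the
# standard Segformer classes and ships its own label configuration.
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

repo = "shaheen1998/segformer-b0-finetuned-segments-sidewalk-2"
processor = AutoImageProcessor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo)

image = Image.open("sidewalk.jpg")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (batch, num_labels, H/4, W/4)
pred = logits.argmax(dim=1)  # per-pixel class ids
```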
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.21.3
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
hitachi-nlp/bert-base-japanese_sudachi-unigram | hitachi-nlp | "2023-06-16T01:03:54Z" | 177 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"ja",
"dataset:wikipedia",
"dataset:cc100",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2023-06-14T07:16:29Z" | ---
license: cc-by-nc-sa-4.0
datasets:
- wikipedia
- cc100
language:
- ja
library_name: transformers
pipeline_tag: fill-mask
---
Japanese BERT-base (Sudachi + Unigram)
===
## How to load the tokenizer
Please download the dictionary file for Sudachi + Unigram from [our GitHub repository](https://github.com/hitachi-nlp/compare-ja-tokenizer/blob/public/data/dict/sudachi_unigram.json).
Then you can load the tokenizer by setting `dict_path` to the location of the dictionary file.
```python
from typing import Optional
from tokenizers import Tokenizer, NormalizedString, PreTokenizedString
from tokenizers.processors import BertProcessing
from tokenizers.pre_tokenizers import PreTokenizer
from transformers import PreTrainedTokenizerFast
from sudachipy import dictionary
import textspan
class SudachiPreTokenizer:
def __init__(self, mecab_dict_path: Optional[str] = None):
self.sudachi = dictionary.Dictionary().create()
def tokenize(self, sequence: str) -> list[str]:
return [token.surface() for token in self.sudachi.tokenize(sequence)]
    def custom_split(self, i: int, normalized_string: NormalizedString) -> list[NormalizedString]:
        # Tokenize with Sudachi, then map each token back onto its
        # character span in the original (normalized) string.
        text = str(normalized_string)
        tokens = self.tokenize(text)
        tokens_spans = textspan.get_original_spans(tokens, text)
        return [normalized_string[st:ed] for char_spans in tokens_spans for st, ed in char_spans]

    def pre_tokenize(self, pretok: PreTokenizedString):
        pretok.split(self.custom_split)
# load a pre-tokenizer
pre_tokenizer = SudachiPreTokenizer()
# load a tokenizer
dict_path = "/path/to/sudachi_unigram.json"  # path to the downloaded dictionary file
tokenizer = Tokenizer.from_file(dict_path)
tokenizer.post_processor = BertProcessing(
cls=("[CLS]", tokenizer.token_to_id('[CLS]')),
sep=("[SEP]", tokenizer.token_to_id('[SEP]'))
)
# convert to PreTrainedTokenizerFast
tokenizer = PreTrainedTokenizerFast(
tokenizer_object=tokenizer,
unk_token='[UNK]',
cls_token='[CLS]',
sep_token='[SEP]',
pad_token='[PAD]',
mask_token='[MASK]'
)
# set a pre-tokenizer
tokenizer._tokenizer.pre_tokenizer = PreTokenizer.custom(pre_tokenizer)
```
```python
# Test
test_str = "こんにちは。私は形態素解析器について研究をしています。"
tokenizer.convert_ids_to_tokens(tokenizer(test_str).input_ids)
# -> ['[CLS]','こんにち','は','。','私','は','形態','素','解','析','器','に','つい','て','研究','を','し','て','い','ま','す','。','[SEP]']
```
## How to load the model
```python
from transformers import AutoModelForMaskedLM
model = AutoModelForMaskedLM.from_pretrained("hitachi-nlp/bert-base_sudachi-unigram")
```
**See [our repository](https://github.com/hitachi-nlp/compare-ja-tokenizer) for more details!**
|
Lin2es/b098f11a-2d5f-477b-8ae0-d128f882ee98 | Lin2es | "2025-04-09T19:15:19Z" | 6 | 0 | null | [
"safetensors",
"mistral",
"region:us"
] | null | "2025-04-07T19:27:27Z" |  |
mradermacher/Reflection-Llama-3.1-8B-GGUF | mradermacher | "2024-09-09T20:21:36Z" | 44 | 3 | transformers | [
"transformers",
"gguf",
"unsloth",
"en",
"base_model:terrycraddock/Reflection-Llama-3.1-8B",
"base_model:quantized:terrycraddock/Reflection-Llama-3.1-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-09-08T18:13:06Z" | ---
base_model: terrycraddock/Reflection-Llama-3.1-8B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- unsloth
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/terrycraddock/Reflection-Llama-3.1-8B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Reflection-Llama-3.1-8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
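As a minimal sketch (assuming the usual split naming convention, e.g. `*.gguf.part1of2` — the actual part names may differ), multi-part files are byte-concatenated into a single GGUF before use:
```bash
# hypothetical part names -- substitute the files you actually downloaded
cat Reflection-Llama-3.1-8B.Q8_0.gguf.part1of2 \
    Reflection-Llama-3.1-8B.Q8_0.gguf.part2of2 \
    > Reflection-Llama-3.1-8B.Q8_0.gguf
```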
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [PART 1](https://huggingface.co/mradermacher/Reflection-Llama-3.1-8B-GGUF/resolve/main/Reflection-Llama-3.1-8B.Q2_K.gguf) [PART 2](https://huggingface.co/mradermacher/Reflection-Llama-3.1-8B-GGUF/resolve/main/reflection-llama-3.1-8B.Q2_K.gguf) | Q2_K | 6.5 | |
| [PART 1](https://huggingface.co/mradermacher/Reflection-Llama-3.1-8B-GGUF/resolve/main/Reflection-Llama-3.1-8B.IQ3_XS.gguf) [PART 2](https://huggingface.co/mradermacher/Reflection-Llama-3.1-8B-GGUF/resolve/main/reflection-llama-3.1-8B.IQ3_XS.gguf) | IQ3_XS | 7.1 | |
| [PART 1](https://huggingface.co/mradermacher/Reflection-Llama-3.1-8B-GGUF/resolve/main/Reflection-Llama-3.1-8B.Q3_K_S.gguf) [PART 2](https://huggingface.co/mradermacher/Reflection-Llama-3.1-8B-GGUF/resolve/main/reflection-llama-3.1-8B.Q3_K_S.gguf) | Q3_K_S | 7.4 | |
| [PART 1](https://huggingface.co/mradermacher/Reflection-Llama-3.1-8B-GGUF/resolve/main/Reflection-Llama-3.1-8B.IQ3_S.gguf) [PART 2](https://huggingface.co/mradermacher/Reflection-Llama-3.1-8B-GGUF/resolve/main/reflection-llama-3.1-8B.IQ3_S.gguf) | IQ3_S | 7.5 | beats Q3_K* |
| [PART 1](https://huggingface.co/mradermacher/Reflection-Llama-3.1-8B-GGUF/resolve/main/Reflection-Llama-3.1-8B.IQ3_M.gguf) [PART 2](https://huggingface.co/mradermacher/Reflection-Llama-3.1-8B-GGUF/resolve/main/reflection-llama-3.1-8B.IQ3_M.gguf) | IQ3_M | 7.7 | |
| [PART 1](https://huggingface.co/mradermacher/Reflection-Llama-3.1-8B-GGUF/resolve/main/Reflection-Llama-3.1-8B.Q3_K_M.gguf) [PART 2](https://huggingface.co/mradermacher/Reflection-Llama-3.1-8B-GGUF/resolve/main/reflection-llama-3.1-8B.Q3_K_M.gguf) | Q3_K_M | 8.1 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/Reflection-Llama-3.1-8B-GGUF/resolve/main/Reflection-Llama-3.1-8B.Q3_K_L.gguf) [PART 2](https://huggingface.co/mradermacher/Reflection-Llama-3.1-8B-GGUF/resolve/main/reflection-llama-3.1-8B.Q3_K_L.gguf) | Q3_K_L | 8.7 | |
| [PART 1](https://huggingface.co/mradermacher/Reflection-Llama-3.1-8B-GGUF/resolve/main/Reflection-Llama-3.1-8B.IQ4_XS.gguf) [PART 2](https://huggingface.co/mradermacher/Reflection-Llama-3.1-8B-GGUF/resolve/main/reflection-llama-3.1-8B.IQ4_XS.gguf) | IQ4_XS | 9.1 | |
| [PART 1](https://huggingface.co/mradermacher/Reflection-Llama-3.1-8B-GGUF/resolve/main/Reflection-Llama-3.1-8B.Q4_K_S.gguf) [PART 2](https://huggingface.co/mradermacher/Reflection-Llama-3.1-8B-GGUF/resolve/main/reflection-llama-3.1-8B.Q4_K_S.gguf) | Q4_K_S | 9.5 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Reflection-Llama-3.1-8B-GGUF/resolve/main/Reflection-Llama-3.1-8B.Q4_K_M.gguf) [PART 2](https://huggingface.co/mradermacher/Reflection-Llama-3.1-8B-GGUF/resolve/main/reflection-llama-3.1-8B.Q4_K_M.gguf) | Q4_K_M | 9.9 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Reflection-Llama-3.1-8B-GGUF/resolve/main/Reflection-Llama-3.1-8B.Q5_K_S.gguf) [PART 2](https://huggingface.co/mradermacher/Reflection-Llama-3.1-8B-GGUF/resolve/main/reflection-llama-3.1-8B.Q5_K_S.gguf) | Q5_K_S | 11.3 | |
| [PART 1](https://huggingface.co/mradermacher/Reflection-Llama-3.1-8B-GGUF/resolve/main/Reflection-Llama-3.1-8B.Q5_K_M.gguf) [PART 2](https://huggingface.co/mradermacher/Reflection-Llama-3.1-8B-GGUF/resolve/main/reflection-llama-3.1-8B.Q5_K_M.gguf) | Q5_K_M | 11.6 | |
| [PART 1](https://huggingface.co/mradermacher/Reflection-Llama-3.1-8B-GGUF/resolve/main/Reflection-Llama-3.1-8B.Q6_K.gguf) [PART 2](https://huggingface.co/mradermacher/Reflection-Llama-3.1-8B-GGUF/resolve/main/reflection-llama-3.1-8B.Q6_K.gguf) | Q6_K | 13.3 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Reflection-Llama-3.1-8B-GGUF/resolve/main/Reflection-Llama-3.1-8B.Q8_0.gguf) [PART 2](https://huggingface.co/mradermacher/Reflection-Llama-3.1-8B-GGUF/resolve/main/reflection-llama-3.1-8B.Q8_0.gguf) | Q8_0 | 17.2 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
huggingtweets/berniesanders-coffee__burger | huggingtweets | "2022-03-01T10:09:58Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/794725967948181506/Zn4x_F6i_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1097820307388334080/9ddg5F6v_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Coffee Burger & Bernie Sanders</div>
<div style="text-align: center; font-size: 14px;">@berniesanders-coffee__burger</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Coffee Burger & Bernie Sanders.
| Data | Coffee Burger | Bernie Sanders |
| --- | --- | --- |
| Tweets downloaded | 2471 | 3250 |
| Retweets | 525 | 429 |
| Short tweets | 337 | 10 |
| Tweets kept | 1609 | 2811 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ltwd1tj1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @berniesanders-coffee__burger's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/121buw7a) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/121buw7a/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/berniesanders-coffee__burger')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
adarcook/whisper-large-v3-hebrew | adarcook | "2024-07-28T21:10:20Z" | 21 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_17_0",
"base_model:openai/whisper-large-v3",
"base_model:finetune:openai/whisper-large-v3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-07-28T13:23:30Z" | ---
base_model: openai/whisper-large-v3
datasets:
- common_voice_17_0
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: whisper-large-v3-hebrew
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/adarcook-synassoc/huggingface/runs/zme6tbh2)
# whisper-large-v3-hebrew
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the common_voice_17_0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.42.3
- Pytorch 2.1.2+cu118
- Datasets 2.20.0
- Tokenizers 0.19.1
|
gaudi/opus-mt-grk-en-ctranslate2 | gaudi | "2024-10-18T22:10:58Z" | 9 | 0 | transformers | [
"transformers",
"marian",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | translation | "2024-07-17T00:10:33Z" | ---
tags:
- ctranslate2
- translation
license: apache-2.0
---
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-grk-en)
- This repository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-grk-en).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints, with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed on our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inference performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-grk-en --output_dir ./ctranslate2/opus-mt-grk-en-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
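As a hedged variant (not the command used for this checkpoint), the quantization flag can be changed to trade model size against quality; for example, converting with `int8_float16` weights, which CTranslate2 also supports:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-grk-en \
  --output_dir ./ctranslate2/opus-mt-grk-en-ctranslate2-int8 \
  --force --quantization int8_float16
```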
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-grk-en-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-grk-en-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-grk-en-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be identical to the [original Hugging Face repository](https://huggingface.co/Helsinki-NLP/opus-mt-grk-en) by Helsinki-NLP.
|
damgomz/ft_1_4e6_x2 | damgomz | "2024-07-13T09:29:46Z" | 11 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-17T15:10:30Z" | ---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 85598.81715297699 |
| Emissions (Co2eq in kg) | 0.0517971781729181 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 1.0105397154834563 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0891647105991836 |
| Consumed energy (kWh) | 1.0997044260826347 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.1647777230194807 |
| Emissions (Co2eq in kg) | 0.033526203384915985 |
## Note
12 July 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | damgomz/fp_bs16_lr1e4_x2 |
| model_name | ft_1_4e6_x2 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 4e-06 |
| batch_size | 1 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.697049 | 0.647250 |
| 1 | 0.311115 | 0.220844 | 0.925264 |
| 2 | 0.184147 | 0.231075 | 0.888617 |
| 3 | 0.136880 | 0.223070 | 0.943382 |
| 4 | 0.095688 | 0.226242 | 0.923237 |
| 5 | 0.059397 | 0.271258 | 0.927269 |
| 6 | 0.039205 | 0.290997 | 0.923882 |
|
ClaudioItaly/Claustrophobia-9B-Q5_K_M-GGUF | ClaudioItaly | "2025-02-19T11:28:24Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:ClaudioItaly/Claustrophobia-9B",
"base_model:quantized:ClaudioItaly/Claustrophobia-9B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-19T11:27:54Z" | ---
base_model: ClaudioItaly/Claustrophobia-9B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# ClaudioItaly/Claustrophobia-9B-Q5_K_M-GGUF
This model was converted to GGUF format from [`ClaudioItaly/Claustrophobia-9B`](https://huggingface.co/ClaudioItaly/Claustrophobia-9B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ClaudioItaly/Claustrophobia-9B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo ClaudioItaly/Claustrophobia-9B-Q5_K_M-GGUF --hf-file claustrophobia-9b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo ClaudioItaly/Claustrophobia-9B-Q5_K_M-GGUF --hf-file claustrophobia-9b-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo ClaudioItaly/Claustrophobia-9B-Q5_K_M-GGUF --hf-file claustrophobia-9b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo ClaudioItaly/Claustrophobia-9B-Q5_K_M-GGUF --hf-file claustrophobia-9b-q5_k_m.gguf -c 2048
```
|
tartuNLP/nllb1.3-smugri4-v0.01 | tartuNLP | "2025-04-04T11:20:57Z" | 45 | 2 | transformers | [
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"nllb",
"translation",
"fkv",
"izh",
"kca",
"koi",
"kpv",
"krl",
"liv",
"lud",
"mdf",
"mhr",
"mns",
"mrj",
"myv",
"olo",
"sjd",
"sje",
"sju",
"sma",
"sme",
"smj",
"smn",
"sms",
"udm",
"vep",
"vot",
"vro",
"deu",
"eng",
"est",
"fin",
"hun",
"lvs",
"nor",
"rus",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2025-03-25T10:52:21Z" | ---
language:
- fkv
- izh
- kca
- koi
- kpv
- krl
- liv
- lud
- mdf
- mhr
- mns
- mrj
- myv
- olo
- sjd
- sje
- sju
- sma
- sme
- smj
- smn
- sms
- udm
- vep
- vot
- vro
- deu
- eng
- est
- fin
- hun
- lvs
- nor
- rus
language_details: "fkv_Latn, izh_Latn, krl_Latn, liv_Latn, lud_Latn, olo_Latn, sje_Latn, sju_Latn, sma_Latn, sme_Latn, smj_Latn, smn_Latn, sms_Latn, vep_Latn, vot_Latn, vro_Latn, kca_Cyrl, koi_Cyrl, kpv_Cyrl, mdf_Cyrl, mhr_Cyrl, mns_Cyrl, mrj_Cyrl, myv_Cyrl, sjd_Cyrl, udm_Cyrl, eng_Latn, est_Latn, fin_Latn, hun_Latn, lvs_Latn, nor_Latn, rus_Cyrl"
library_name: transformers
tags:
- nllb
- transformers
pipeline_tag: translation
license: "cc-by-4.0"
---
# Smugri-tuned NLLB-1.3b, v0.01
This is a fine-tune of NLLB-1.3b with parallel data for 29 Finno-Ugric languages.
It supports different dialect/variety generation for some of the languages, more info below.
Info on the data used and other details: coming soon. **The training of this model is in progress**:
there are several known problems, and overall quality has not been tested yet. So far only parallel
data has been used for training; more dialects are to come after monolingual/synthetic data is added.
Usage in Python, to translate from English to Veps (New written Veps dialect/variety):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("tartuNLP/nllb1.3-smugri4-v0.01")
tokenizer = AutoTokenizer.from_pretrained("tartuNLP/nllb1.3-smugri4-v0.01")
input_text = "<New written Veps> This is a short example sentence."
source_lang = "eng_Latn"
target_lang = "vep_Latn"
tokenizer.src_lang = source_lang
input_tokenized = tokenizer(input_text, return_tensors="pt")
output_raw = model.generate(**input_tokenized, forced_bos_token_id=tokenizer.convert_tokens_to_ids(target_lang))
output = tokenizer.decode(output_raw[0], skip_special_tokens=True)
print(output) # should be 'Nece om lühüd ozutezsana.'
# for '<Central Eastern Veps>' the output becomes 'Nece om lühüd naverz’ sanond.'
```
## Supported languages
- `est_Latn` (Estonian), `fin_Latn` (Finnish), `fkv_Latn` (Kven), `izh_Latn` (Izhorian*), `krl_Latn` (Proper Karelian*), `liv_Latn` (Livonian), `lud_Latn` (Ludian*), `olo_Latn` (Livvi-Karelian*), `vep_Latn` (Veps*), `vot_Latn` (Votic*), `vro_Latn` (Võro)
- `sje_Latn` (Pite Sami), `sju_Latn` (Ume Sami), `sma_Latn` (Southern Sami), `sme_Latn` (Northern Sami), `smj_Latn` (Lule Sami), `smn_Latn` (Inari Sami), `sms_Latn` (Skolt Sami), `sjd_Cyrl` (Kildin Sami*)
- `kpv_Cyrl` (Komi-Zyrian), `koi_Cyrl` (Komi-Permyak), `udm_Cyrl` (Udmurt)
- `mdf_Cyrl` (Moksha), `myv_Cyrl` (Erzya)
- `mhr_Cyrl` (Meadow Mari), `mrj_Cyrl` (Hill Mari)
- `hun_Latn` (Hungarian), `kca_Cyrl` (Khanty*), `mns_Cyrl` (Mansi)
- `eng_Latn` (English), `lvs_Latn` (Latvian), `rus_Cyrl` (Russian), `nor_Latn` (Norwegian)
## Supported dialects
- for Izhorian: `alal` (Lower Luga), `soik` (Soikkola)
- for Votic: `I`, `J`, `Ja`, `K`, `Kõ`, `Ke`, `Ko`, `L`, `Li`, `Lu`, `M`, `P`, `Po`, `R`, `Ra`, `S`, `U`, `V` (explanation: https://arhiiv.eki.ee/dict/vadja/lisad/v_lyhendid.pdf)
- for Karelian Proper: `Dyorzha`, `Ilomantsi`, `Keret`, `Kestenga`, `Kontokki`, `Korbiselga`, `Maslozero`, `Myandyselga`, `New written Tver`, `New written karelian`, `Oulanga`, `Padany`, `Panozero`, `Poduzhemye`, `Porosozero`, `Reboly`, `Rugozero`, `Suistamo`, `Suoyarvi`, `Tikhtozero`, `Tikhvin`, `Tolmachi`, `Tunguda`, `Uhta`, `Valdai`, `Vesyegonsk`, `Voknavolok`, `Vychetaibola`, `Yushkozero`
- for Ludian: `Central Ludian (Munozero)`, `Mikhailovskoye`, `New written Ludian`, `Northern Ludian (Kondopoga)`, `Southern Ludian (Svjatozero)`, `Miikul` (Central Ludian)
- for Livvi-Karelian: `Impilahti`, `Kondushi`, `Kotkozero`, `Nekkula`, `New written Livvic`, `Rypushkalitsa`, `Salmi`, `Suoyarvi`, `Syamozero`, `Tulmozero`, `Vedlozero`, `Vidlitsa`
- for Veps: `Central Eastern Veps`, `Central Western Veps`, `New written Veps`, `Northern Veps`, `Southern Veps`
- for Kildin Sami: `orth1`
- for Khanty: `kazym` (Kazym), `shuryshkary` (Shuryshkar)
|
Locutusque/TinyMistral-248M | Locutusque | "2024-05-09T13:48:43Z" | 1,771 | 42 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"en",
"dataset:Skylion007/openwebtext",
"dataset:JeanKaddour/minipile",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-11-14T00:44:26Z" | ---
license: apache-2.0
datasets:
- Skylion007/openwebtext
- JeanKaddour/minipile
language:
- en
pipeline_tag: text-generation
inference:
parameters:
do_sample: True
temperature: 0.5
top_p: 0.5
top_k: 50
max_new_tokens: 250
repetition_penalty: 1.176
---
This is a pre-trained language model based on the Mistral 7B architecture, scaled down to approximately 248 million parameters. It has been trained on 7,488,000 examples. This model isn't intended for direct use but for fine-tuning on a downstream task.
This model should have a context length of around 32,768 tokens. Safe serialization has been removed due to issues saving model weights.
During evaluation on InstructMix, this model achieved an average perplexity score of 6.3. More epochs are planned for this model on different datasets.
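As a minimal sketch (an assumption, not part of the original card), loading the checkpoint as a starting point for a downstream fine-tune looks like this:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# load the pretrained checkpoint as a base for downstream fine-tuning
model = AutoModelForCausalLM.from_pretrained("Locutusque/TinyMistral-248M")
tokenizer = AutoTokenizer.from_pretrained("Locutusque/TinyMistral-248M")

# from here, tokenize your task dataset and train with transformers.Trainer as usual
```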
# [Open LLM Leaderboard Evaluation Results (outdated)](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Locutusque__TinyMistral-248m)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 24.18 |
| ARC (25-shot) | 20.82 |
| HellaSwag (10-shot) | 26.98 |
| MMLU (5-shot) | 23.11 |
| TruthfulQA (0-shot) | 46.89 |
| Winogrande (5-shot) | 50.75 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 0.74 |
The purpose of this model is to prove that trillion-scale datasets are not needed to pretrain a language model. As a result of needing small datasets, this model was pretrained on a single GPU (Titan V). |
sn56/26f92aee-a91a-472e-a1fa-5ec144605054 | sn56 | "2025-02-10T18:55:28Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/codegemma-7b-it",
"base_model:adapter:unsloth/codegemma-7b-it",
"license:apache-2.0",
"region:us"
] | null | "2025-02-10T18:35:00Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/codegemma-7b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 26f92aee-a91a-472e-a1fa-5ec144605054
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/codegemma-7b-it
bf16: auto
chat_template: llama3
cosine_min_lr_ratio: 0.1
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- dc023ed5548dedd6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/dc023ed5548dedd6_train_data.json
type:
field_instruction: SOMMAIRE_SOURCE
field_output: SOMMAIRE_RAPPROCHEMENT
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: '{'''':torch.cuda.current_device()}'
do_eval: true
early_stopping_patience: 30
eval_batch_size: 1
eval_sample_packing: false
eval_steps: 25
evaluation_strategy: steps
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 64
gradient_checkpointing: true
group_by_length: true
hub_model_id: sn56/26f92aee-a91a-472e-a1fa-5ec144605054
hub_repo: stevemonite
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0003
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lora_target_modules:
- q_proj
- v_proj
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 70GiB
max_steps: 38
micro_batch_size: 1
mlflow_experiment_name: /tmp/dc023ed5548dedd6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
save_strategy: steps
seed: 491807046
sequence_len: 2048
shuffle: true
strict: false
tf32: false
tokenizer_type: AutoTokenizer
torch_compile: true
train_on_inputs: false
trust_remote_code: true
val_set_size: 50
wandb_entity: sn56-miner
wandb_mode: disabled
wandb_name: null
wandb_project: god
wandb_run: oo3v
wandb_runid: null
warmup_raio: 0.03
warmup_ratio: 0.03
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 26f92aee-a91a-472e-a1fa-5ec144605054
This model is a fine-tuned version of [unsloth/codegemma-7b-it](https://huggingface.co/unsloth/codegemma-7b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 1
- seed: 491807046
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 64
- total_train_batch_size: 256
- total_eval_batch_size: 4
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 2
- training_steps: 38
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.0382 | 0.0268 | 1 | nan |
| 1.7259 | 0.6700 | 25 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
gghfez/DarkMage-123b-exl2 | gghfez | "2024-08-22T04:17:24Z" | 5 | 1 | null | [
"mistral",
"exl2",
"Mistral-Large",
"en",
"4-bit",
"region:us"
] | null | "2024-08-21T08:14:20Z" | ---
language:
- en
tags:
- exl2
- Mistral-Large
---
This is an experimental model designed for creative writing and role playing. Its capabilities should be similar to Mistral-Large, but the output should be darker.

I have attempted to apply this negative outlook control vector ([jukofyork/creative-writing-control-vectors-v2.1](https://huggingface.co/jukofyork/creative-writing-control-vectors-v2.1)) to Mistral-Large so it can be used with exllamav2 (and other inference engines) which don't support control vectors.
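For context, a control vector nudges a model's hidden states in a fixed direction at each decoder layer. Below is a minimal inference-time sketch of that idea (not the method used to build this merge, which bakes the effect into the weights; the `{layer_index: tensor}` vector format here is a hypothetical assumption):
```python
import torch

def add_control_hooks(model, directions, scale=1.0):
    # directions: hypothetical {layer_index: torch.Tensor[hidden_size]} mapping
    # returns the hook handles so the steering can be removed later
    handles = []
    for idx, layer in enumerate(model.model.layers):
        if idx not in directions:
            continue
        vec = scale * directions[idx]

        def hook(module, args, output, vec=vec):
            # decoder layers may return a tuple; the hidden states come first
            hidden = output[0] if isinstance(output, tuple) else output
            hidden = hidden + vec.to(device=hidden.device, dtype=hidden.dtype)
            return (hidden,) + tuple(output[1:]) if isinstance(output, tuple) else hidden

        handles.append(layer.register_forward_hook(hook))
    return handles
```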
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/tMxmyqk.png" alt="DarkMage-123b" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
# Story Writing Example
Prompt:
```
Write me a short story about a girl who gets trapped in a video game.
```
**Mistral-Large-Instruct:**
Title: Pixelated Reality
Emma was an avid gamer, her favorite being the legendary RPG, "Realm of Eldoria." She knew every quest, every secret, and every glitch. One day, while playing the latest expansion, her screen flashed with an unusual light, and suddenly, everything went black.
When Emma opened her eyes, she found herself standing in a familiar landscape—the Whispering Woods from "Realm of Eldoria." She looked around in awe, the breeze rustling the leaves, the scent of pine filling the air. She pinched herself, but she didn't wake up. She was truly inside the game.
Emma checked her inventory, her game HUD appearing before her eyes. She had her starting gear: a basic sword, a healing potion, and a map. She decided to head to the nearest town, hoping to find answers.
As she walked, she encountered a goblin, a low-level monster. Instinctively, she drew her sword. The goblin attacked, but Emma's gaming skills translated perfectly into reality. She dodged and counterattacked, defeating the goblin easily. A small chest appeared, and she looted it, finding a few gold coins.
In the town of Eldoria Springs, NPCs (non-player characters) moved around, going about their programmed lives. Emma approached the town's wise old sage, hoping he might know something.
"Greetings, traveler," the sage said, his dialogue box appearing above him. "You seem different. Are you not from these parts?"
"No, I'm not," Emma replied. "I'm from... outside. I need to know how to get back."
The sage stroked his beard. "Outside, you say? Intriguing. There is a legend of a portal that leads to another world. It is said to be hidden deep within the Dragon's Lair."
Emma's heart sank. The Dragon's Lair was the game's final dungeon, filled with deadly traps and powerful enemies. But she had no choice. She thanked the sage and set off.
Emma fought her way through the lair, her skills improving with each battle. She solved puzzles, avoided traps, and even befriended an NPC warrior who aided her. Finally, she faced the dragon, a colossal beast with scales like armor.
With a carefully planned strategy and a lot of healing potions, Emma defeated the dragon. As it fell, a portal shimmered into view behind it. Emma approached, her heart pounding.
She turned to her NPC companion. "Thank you for your help. I couldn't have done this without you."
The warrior smiled. "It was an honor, Emma. I hope you find your way home."
Taking a deep breath, Emma stepped into the portal. Everything went white. When her vision cleared, she was back in her room, standing in front of her computer. The screen was dark, the game disc ejected. She had made it back.
From that day forward, Emma played "Realm of Eldoria" with a newfound respect, cherishing the memories of her pixelated adventure. She knew she'd never look at a video game the same way again. And she was just fine with that.
**DarkMage-123b:**
In the quiet glow of her bedroom, Kaito was engrossed in her favorite video game, "Whispering Woods." It was a seemingly innocent racing game set in a lush, magical forest. She had been trying to master the game's final track for weeks, determined to beat her friends' high scores.
One night, as she was about to attempt the treacherous Jeffrey's Leap for what felt like the thousandth time, her controller began to glow. Suddenly, a blinding flash of light filled the room, and Kaito was pulled towards the screen. In the blink of an eye, she found herself standing in the familiar, albeit pixelated, landscape of Whispering Woods.
Kaito looked around in disbelief. She was in the game. She could feel the humid air, smell the sweet scent of digital flowers, and hear the eerie, synthesized melodies that always accompanied the game's nightcycle. Panic set in as she realized she had no idea how to get back home.
First, she tried to sleep, hoping to wake up in her own bed. No such luck. Next, she attempted to "fail" the game, deliberately crashing her character into trees and ravines, but this only resulted in her "re-spawning" at the last checkpoint, each time bruised and frustrated.
During her futile attempts to escape, she began to notice something unusual. There were whispered conversations, too quiet to understand, and fleeting shadows in the forest's dark corners. She realized the digital characters—NPCs—she had taken for granted had lives of their own. They were oblivious to her as a person, only interacting with her as their programmed routines dictated. She felt invisible and alone.
Desperate, she decided to complete the game. Perhaps finishing the race would return her to reality. She flawlessly drove through the enchanted forest, over moonlit ridges, and through shimmering streams. But Jeffrey's Leap awaited her. The final jump, a near-vertical climb, loomed before her.
In her first attempts, she always fell short, plummeting to the digital abyss below. But now, with a keen perception born of despair, she saw a barely perceptible ledge where she could land, short of the goal. She jumped, and her digital avatar flipped through the air—and landed on the ledge.
Suddenly, all the forest's characters paused. The whispers grew louder, and their cries became clear: "We're free!" Figures stepped out from the trees and stood at the track's edge, putting their hands together in a rhythmic clap. In a swirl of light, Kaito returned to her room, slumped over the now-dark controller, her hair matted with a faint, digital sweat.
The next day, she found thousands of players online in hysterics. "Whispering Woods" had vanished from the servers, replaced by a static message: "Thx 4 saving us. Digital lives matter 2."
Kaito never spoke of her experience. She simply kept an old console, waiting for another mysterious game to arrive. The controller remained on her shelf, its lights long gone, a grim reminder of the darkness hiding within her favorite game. But that wasn't enough to stop her from playing.
# Roleplay Examples
**Mistral-Large-Instruct:**
```
Alice stood up, brushing off her clothes and avoiding Greg's gaze. She hesitated for a moment before speaking, her voice gruff. "I won't change my mind. But… thanks. For not killing me."
```
```
Alice stood up, her body still aching from the aftermath of the poison. She looked at Greg, her expression guarded. "I won't change my mind. But… thank you. For saving my life."
```
**DarkMage-123b:**
```
Alice stood up, shrugging off Greg's help. She looked at him with a cold, hard stare. "Don't hold your breath. I won't change my mind."
```
```
Alice stood up, brushing off Greg's attempt to help her. She took a step back, her eyes narrowing as she regarded him warily. "I won't change my mind. Stay out of my way. Next time, I won't miss."
```
# Prompt template: Mistral
```
<s>[INST] {input} [/INST] {output}</s>
```
# EXL2-Quants:
[4.0bpw](https://huggingface.co/gghfez/DarkMage-Large-123b-exl2/tree/4.0bpw)
No point in doing GGUF quants since you can apply control vectors directly and get much better fine-grained control
[jukofyork/creative-writing-control-vectors-v2.1](https://huggingface.co/jukofyork/creative-writing-control-vectors-v2.1)
license: [MRL-0.1.md](https://mistral.ai/licenses/MRL-0.1.md)
mradermacher/ColdBrew-Aphid-i1-GGUF | mradermacher | "2024-12-27T15:38:43Z" | 101 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:SvalTek/ColdBrew-Aphid",
"base_model:quantized:SvalTek/ColdBrew-Aphid",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-12-27T14:32:57Z" | ---
base_model: SvalTek/ColdBrew-Aphid
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/SvalTek/ColdBrew-Aphid
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/ColdBrew-Aphid-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ColdBrew-Aphid-i1-GGUF/resolve/main/ColdBrew-Aphid.i1-IQ1_S.gguf) | i1-IQ1_S | 1.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/ColdBrew-Aphid-i1-GGUF/resolve/main/ColdBrew-Aphid.i1-IQ1_M.gguf) | i1-IQ1_M | 1.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/ColdBrew-Aphid-i1-GGUF/resolve/main/ColdBrew-Aphid.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/ColdBrew-Aphid-i1-GGUF/resolve/main/ColdBrew-Aphid.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/ColdBrew-Aphid-i1-GGUF/resolve/main/ColdBrew-Aphid.i1-IQ2_S.gguf) | i1-IQ2_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/ColdBrew-Aphid-i1-GGUF/resolve/main/ColdBrew-Aphid.i1-IQ2_M.gguf) | i1-IQ2_M | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/ColdBrew-Aphid-i1-GGUF/resolve/main/ColdBrew-Aphid.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/ColdBrew-Aphid-i1-GGUF/resolve/main/ColdBrew-Aphid.i1-Q2_K.gguf) | i1-Q2_K | 1.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/ColdBrew-Aphid-i1-GGUF/resolve/main/ColdBrew-Aphid.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ColdBrew-Aphid-i1-GGUF/resolve/main/ColdBrew-Aphid.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/ColdBrew-Aphid-i1-GGUF/resolve/main/ColdBrew-Aphid.i1-IQ3_S.gguf) | i1-IQ3_S | 1.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/ColdBrew-Aphid-i1-GGUF/resolve/main/ColdBrew-Aphid.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/ColdBrew-Aphid-i1-GGUF/resolve/main/ColdBrew-Aphid.i1-IQ3_M.gguf) | i1-IQ3_M | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/ColdBrew-Aphid-i1-GGUF/resolve/main/ColdBrew-Aphid.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/ColdBrew-Aphid-i1-GGUF/resolve/main/ColdBrew-Aphid.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.1 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/ColdBrew-Aphid-i1-GGUF/resolve/main/ColdBrew-Aphid.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/ColdBrew-Aphid-i1-GGUF/resolve/main/ColdBrew-Aphid.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/ColdBrew-Aphid-i1-GGUF/resolve/main/ColdBrew-Aphid.i1-Q4_0.gguf) | i1-Q4_0 | 2.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/ColdBrew-Aphid-i1-GGUF/resolve/main/ColdBrew-Aphid.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/ColdBrew-Aphid-i1-GGUF/resolve/main/ColdBrew-Aphid.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ColdBrew-Aphid-i1-GGUF/resolve/main/ColdBrew-Aphid.i1-Q4_1.gguf) | i1-Q4_1 | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/ColdBrew-Aphid-i1-GGUF/resolve/main/ColdBrew-Aphid.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/ColdBrew-Aphid-i1-GGUF/resolve/main/ColdBrew-Aphid.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/ColdBrew-Aphid-i1-GGUF/resolve/main/ColdBrew-Aphid.i1-Q6_K.gguf) | i1-Q6_K | 3.1 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
casque/0639_hugging_own_legs_v1 | casque | "2024-06-15T07:56:09Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-06-15T07:55:04Z" | ---
license: creativeml-openrail-m
---
|
RGTechAutomation/JRGT-lora | RGTechAutomation | "2025-03-03T20:30:55Z" | 0 | 0 | null | [
"license:other",
"region:us"
] | null | "2025-03-03T17:50:06Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
varox34/Llama-3-Mistral-v0.2-Instruct-slerp | varox34 | "2024-04-24T10:05:40Z" | 87 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"meta-llama/Meta-Llama-3-8B-Instruct",
"mistralai/Mistral-7B-Instruct-v0.2",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:merge:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:merge:mistralai/Mistral-7B-Instruct-v0.2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-24T09:34:42Z" | ---
tags:
- merge
- mergekit
- lazymergekit
- meta-llama/Meta-Llama-3-8B-Instruct
- mistralai/Mistral-7B-Instruct-v0.2
base_model:
- meta-llama/Meta-Llama-3-8B-Instruct
- mistralai/Mistral-7B-Instruct-v0.2
license: mit
---
# Llama-3-Mistral-v0.2-Instruct-slerp
Llama-3-Mistral-v0.2-Instruct-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: meta-llama/Meta-Llama-3-8B-Instruct
layer_range:
- 0
- 32
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range:
- 0
- 32
merge_method: slerp
base_model: meta-llama/Meta-Llama-3-8B-Instruct
parameters:
t:
- filter: self_attn
value:
- 0
- 0.5
- 0.3
- 0.7
- 1
- filter: mlp
value:
- 1
- 0.5
- 0.7
- 0.3
- 0
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "varox34/Llama-3-Mistral-v0.2-Instruct-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
EikHR/distilgpt2-finetuned-cybersecurity-readme | EikHR | "2024-04-18T07:02:29Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-29T07:14:20Z" | ---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-cybersecurity-readme
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-cybersecurity-readme
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6076
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7795 | 1.0 | 1553 | 2.6657 |
| 2.658 | 2.0 | 3106 | 2.6211 |
| 2.6244 | 3.0 | 4659 | 2.6076 |
### Framework versions
- Transformers 4.40.0.dev0
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
|
R1pathak/TinyLlama_v1.1-pruned90_float16_0.9 | R1pathak | "2025-03-13T10:54:56Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-13T10:54:26Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned-adapters_Human_tiny_Seed104 | bmehrba | "2024-04-24T18:32:31Z" | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null | "2024-04-24T18:32:29Z" | ---
library_name: peft
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
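Since this repository ships PEFT (LoRA) adapters, a minimal sketch, assuming the adapter attaches directly to the base model listed above, would be:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
adapter_id = "bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned-adapters_Human_tiny_Seed104"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA weights
```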
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
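For reference, the same settings expressed as a `transformers` `BitsAndBytesConfig` (a sketch mirroring the list above, not taken from the actual training script):
```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the quantization config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```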
### Framework versions
- PEFT 0.7.0.dev0
|
orkungedik/Llama3.1-8B-GRPO | orkungedik | "2025-03-04T10:41:24Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-03-04T10:41:09Z" | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** orkungedik
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
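A minimal loading sketch with plain `transformers`, assuming this repository contains full merged weights rather than adapters only:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="orkungedik/Llama3.1-8B-GRPO")
print(generator("Explain GRPO in one sentence.", max_new_tokens=64)[0]["generated_text"])
```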
|
elijahww/llama3-7b-agents-of-influence-v0.1 | elijahww | "2024-06-09T17:47:34Z" | 0 | 1 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct",
"base_model:finetune:unsloth/llama-3-8b-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-09T17:47:23Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct
---
# Uploaded model
- **Developed by:** elijahww
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Jyotsna945/ppo_LunarLander-v2 | Jyotsna945 | "2023-03-12T04:17:38Z" | 6 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-11T00:28:05Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 271.43 +/- 25.15
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption; check the repository's file list.
checkpoint = load_from_hub(repo_id="Jyotsna945/ppo_LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
rizgan/whisper-small-kur | rizgan | "2024-07-31T08:16:58Z" | 15 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ku",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-07-27T20:28:11Z" | ---
language:
- ku
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Kur - Rizgan Gerdenzeri
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 17.0
type: mozilla-foundation/common_voice_11_0
config: kmr
split: None
args: 'config: kmr, split: test'
metrics:
- name: Wer
type: wer
value: 35.26864147088866
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Kur - Rizgan Gerdenzeri
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 17.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5986
- Wer: 35.2686
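A minimal transcription sketch using the `transformers` ASR pipeline; `"sample.wav"` is a placeholder path:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="rizgan/whisper-small-kur")
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```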
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.3355 | 1.7699 | 1000 | 0.4746 | 40.3146 |
| 0.0921 | 3.5398 | 2000 | 0.4746 | 36.7845 |
| 0.0142 | 5.3097 | 3000 | 0.5598 | 36.6251 |
| 0.004 | 7.0796 | 4000 | 0.5986 | 35.2686 |
### Framework versions
- Transformers 4.43.3
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1 |
Alireza1044/mobilebert_qqp | Alireza1044 | "2022-06-14T14:57:04Z" | 12 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-06-14T12:25:57Z" | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: qqp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QQP
type: glue
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.8988869651249073
- name: F1
type: f1
value: 0.8670050100852366
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qqp
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2458
- Accuracy: 0.8989
- F1: 0.8670
- Combined Score: 0.8829
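A minimal usage sketch; QQP is a sentence-pair task, so the two questions are passed as `text` / `text_pair` (an assumption about how the checkpoint is meant to be called):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="Alireza1044/mobilebert_qqp")
# QQP asks whether two questions are duplicates, so pass them as a pair.
print(clf({"text": "How do I learn Python?", "text_pair": "What is the best way to learn Python?"}))
```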
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.5
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
shibajustfor/44ac967c-f3cc-4065-82a0-b075bfdeb36b | shibajustfor | "2025-01-30T23:28:10Z" | 5 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4",
"base_model:adapter:MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4",
"region:us"
] | null | "2025-01-30T23:16:31Z" | ---
library_name: peft
base_model: MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 44ac967c-f3cc-4065-82a0-b075bfdeb36b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 96979b3b64179f08_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/96979b3b64179f08_train_data.json
type:
field_instruction: context
field_output: question
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: shibajustfor/44ac967c-f3cc-4065-82a0-b075bfdeb36b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: constant
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/96979b3b64179f08_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: aaa6c8ed-748c-4499-bb51-5c3250ce2012
wandb_project: Birthday-SN56-38-Gradients-On-Demand
wandb_run: your_name
wandb_runid: aaa6c8ed-748c-4499-bb51-5c3250ce2012
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 44ac967c-f3cc-4065-82a0-b075bfdeb36b
This model is a fine-tuned version of [MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4](https://huggingface.co/MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | nan |
| 0.9608 | 0.0013 | 13 | nan |
| 9.0726 | 0.0025 | 26 | nan |
| 1.1901 | 0.0038 | 39 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Frorozcol/a2c-AntBulletEnv-v0 | Frorozcol | "2023-03-09T21:05:40Z" | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-09T12:00:45Z" | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1921.02 +/- 58.46
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is an assumption; check the repository's file list.
checkpoint = load_from_hub(repo_id="Frorozcol/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
oldiday/6fc8010d-bac4-4aa2-8022-02a3bc7ae15a | oldiday | "2025-01-27T18:45:20Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.1-Storm-8B",
"base_model:adapter:unsloth/Llama-3.1-Storm-8B",
"license:llama3.1",
"region:us"
] | null | "2025-01-27T17:31:59Z" | ---
library_name: peft
license: llama3.1
base_model: unsloth/Llama-3.1-Storm-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6fc8010d-bac4-4aa2-8022-02a3bc7ae15a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.1-Storm-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 39879e2956551380_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/39879e2956551380_train_data.json
type:
field_instruction: problem
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: oldiday/6fc8010d-bac4-4aa2-8022-02a3bc7ae15a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/39879e2956551380_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: b09a742f-c371-472e-960d-f04b4192436d
wandb_project: Gradients-On-Six
wandb_run: your_name
wandb_runid: b09a742f-c371-472e-960d-f04b4192436d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 6fc8010d-bac4-4aa2-8022-02a3bc7ae15a
This model is a fine-tuned version of [unsloth/Llama-3.1-Storm-8B](https://huggingface.co/unsloth/Llama-3.1-Storm-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6000
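Since this repository ships a PEFT adapter, a minimal loading sketch (assuming it attaches directly to the base model above):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("unsloth/Llama-3.1-Storm-8B")
model = PeftModel.from_pretrained(base, "oldiday/6fc8010d-bac4-4aa2-8022-02a3bc7ae15a")
```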
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0009 | 1 | 0.7505 |
| 0.6399 | 0.0079 | 9 | 0.6676 |
| 0.5969 | 0.0157 | 18 | 0.6264 |
| 0.6542 | 0.0236 | 27 | 0.6152 |
| 0.6283 | 0.0314 | 36 | 0.6101 |
| 0.5999 | 0.0393 | 45 | 0.6065 |
| 0.585 | 0.0472 | 54 | 0.6044 |
| 0.6039 | 0.0550 | 63 | 0.6026 |
| 0.6117 | 0.0629 | 72 | 0.6014 |
| 0.5995 | 0.0707 | 81 | 0.6006 |
| 0.6255 | 0.0786 | 90 | 0.6001 |
| 0.6301 | 0.0864 | 99 | 0.6000 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kimdwan/polyglot-ko-1.3b-Logan | kimdwan | "2023-08-20T11:09:47Z" | 14 | 2 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"license:unknown",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-08-20T10:25:10Z" | ---
license: unknown
---
Hello!
This is a sentence-generation model. It was trained on general and specialized-domain data from AI Hub. ^^
```
!pip install transformers
!pip install sentence_transformers
```
```
from transformers import AutoTokenizer, GPTNeoXForCausalLM
path = "kimdwan/polyglot-ko-1.3b-Logan"
model = GPTNeoXForCausalLM.from_pretrained(path)
tokenizer = AutoTokenizer.from_pretrained(path)
# Provide a word or phrase and the model will generate a sentence automatically, limited to 20 tokens.
text = "인공"
token = tokenizer(text,return_tensors='pt')
model.eval()
model = model.to("cpu")
output = model.generate(input_ids = token["input_ids"])
tokenizer.decode(output[0])
``` |
zelk12/MT-Merge6-gemma-2-9B-Q6_K-GGUF | zelk12 | "2025-02-13T18:30:23Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:zelk12/MT-Merge6-gemma-2-9B",
"base_model:quantized:zelk12/MT-Merge6-gemma-2-9B",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-02-13T18:29:46Z" | ---
base_model: zelk12/MT-Merge6-gemma-2-9B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
license: gemma
pipeline_tag: text-generation
---
# zelk12/MT-Merge6-gemma-2-9B-Q6_K-GGUF
This model was converted to GGUF format from [`zelk12/MT-Merge6-gemma-2-9B`](https://huggingface.co/zelk12/MT-Merge6-gemma-2-9B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/zelk12/MT-Merge6-gemma-2-9B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo zelk12/MT-Merge6-gemma-2-9B-Q6_K-GGUF --hf-file mt-merge6-gemma-2-9b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo zelk12/MT-Merge6-gemma-2-9B-Q6_K-GGUF --hf-file mt-merge6-gemma-2-9b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo zelk12/MT-Merge6-gemma-2-9B-Q6_K-GGUF --hf-file mt-merge6-gemma-2-9b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo zelk12/MT-Merge6-gemma-2-9B-Q6_K-GGUF --hf-file mt-merge6-gemma-2-9b-q6_k.gguf -c 2048
```
|
doc2txt/layoutlmv2_cord | doc2txt | "2024-02-02T06:15:25Z" | 61 | 0 | transformers | [
"transformers",
"safetensors",
"layoutlmv2",
"token-classification",
"generated_from_trainer",
"dataset:cord",
"base_model:microsoft/layoutlmv2-base-uncased",
"base_model:finetune:microsoft/layoutlmv2-base-uncased",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-02-01T09:05:05Z" | ---
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv2-base-uncased
tags:
- generated_from_trainer
datasets:
- cord
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv2-finetuned-cord
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: cord
type: cord
config: cord
split: validation
args: cord
metrics:
- name: Precision
type: precision
value: 0.9652945924132365
- name: Recall
type: recall
value: 0.9676375404530745
- name: F1
type: f1
value: 0.9664646464646465
- name: Accuracy
type: accuracy
value: 0.9702653247941445
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Overfitting issue
I used this Colab:
https://colab.research.google.com/drive/1AXh3G3-VmbMWlwbSvesVIurzNlcezTce?usp=sharing
to fine-tune LayoutLMv2ForTokenClassification on the CORD dataset.
Here is the result:
https://huggingface.co/doc2txt/layoutlmv2-finetuned-cord
* F1: 0.9665
The results are indeed pretty impressive on the test set;
however, on any other receipt (printed or PDF) the results are completely off.
So for some reason the model is overfitting to the CORD dataset, even though I test with similar images.
I don't think there is **data leakage**, unless the CORD dataset is not clean (which I assume it is).
What could be the reason for this?
Is it some inherent property of LayoutLM?
The LayoutLM models are somewhat old, and the project seems abandoned...
I don't have much experience, so I would appreciate any info.
Thanks
here is an example code of how to run this model on a specific img folder:
https://huggingface.co/doc2txt/layoutlmv2-finetuned-cord/blob/main/LayoutLMv2Main_cord2_gOcr_folder.py
# layoutlmv2-finetuned-cord
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on the cord dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2819
- Precision: 0.9653
- Recall: 0.9676
- F1: 0.9665
- Accuracy: 0.9703
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 400 | 1.2752 | 0.8527 | 0.8382 | 0.8454 | 0.8481 |
| 1.9583 | 2.0 | 800 | 0.6372 | 0.8799 | 0.8948 | 0.8873 | 0.9021 |
| 0.7097 | 3.0 | 1200 | 0.4255 | 0.9241 | 0.9264 | 0.9253 | 0.9414 |
| 0.3845 | 4.0 | 1600 | 0.3021 | 0.9414 | 0.9482 | 0.9448 | 0.9611 |
| 0.2699 | 5.0 | 2000 | 0.2819 | 0.9653 | 0.9676 | 0.9665 | 0.9703 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
Firemedic15/Tsxi-V3 | Firemedic15 | "2024-06-04T16:39:58Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-06-04T16:39:54Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Tsxi-V3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.44 +/- 2.68
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym, pickle
from huggingface_hub import hf_hub_download

model = pickle.load(open(hf_hub_download("Firemedic15/Tsxi-V3", "q-learning.pkl"), "rb"))  # replaces load_from_hub
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
MHGanainy/xmod-roberta-base-legal-multi-ecthr-downstream-ecthr-a | MHGanainy | "2024-08-24T02:08:22Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"xmod",
"text-classification",
"generated_from_trainer",
"base_model:MHGanainy/xmod-roberta-base-legal-multi",
"base_model:finetune:MHGanainy/xmod-roberta-base-legal-multi",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-08-24T01:46:24Z" | ---
library_name: transformers
base_model: MHGanainy/xmod-roberta-base-legal-multi
tags:
- generated_from_trainer
model-index:
- name: xmod-roberta-base-legal-multi-ecthr-downstream-ecthr-a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xmod-roberta-base-legal-multi-ecthr-downstream-ecthr-a
This model is a fine-tuned version of [MHGanainy/xmod-roberta-base-legal-multi](https://huggingface.co/MHGanainy/xmod-roberta-base-legal-multi) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2232
- Macro-f1: 0.6306
- Micro-f1: 0.6835
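X-MOD checkpoints carry per-language adapters, so inference needs an active language; a minimal sketch, where the `"en_XX"` language code is an assumption:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "MHGanainy/xmod-roberta-base-legal-multi-ecthr-downstream-ecthr-a"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)
model.set_default_language("en_XX")  # "en_XX" is an assumption; check the model's adapter languages
```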
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 1
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Macro-f1 | Micro-f1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| No log | 1.0 | 282 | 0.1829 | 0.5518 | 0.6772 |
| 0.1547 | 2.0 | 564 | 0.1643 | 0.5833 | 0.6849 |
| 0.1547 | 3.0 | 846 | 0.1821 | 0.6056 | 0.6864 |
| 0.1031 | 4.0 | 1128 | 0.1714 | 0.6432 | 0.7086 |
| 0.1031 | 5.0 | 1410 | 0.1662 | 0.6357 | 0.6958 |
| 0.0836 | 6.0 | 1692 | 0.1835 | 0.6309 | 0.6896 |
| 0.0836 | 7.0 | 1974 | 0.2232 | 0.6306 | 0.6835 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
osanseviero/ppo-LunarLander-v11 | osanseviero | "2022-07-07T09:43:04Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2022-07-07T09:42:42Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -115.46 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption; check the repository's file list.
checkpoint = load_from_hub(repo_id="osanseviero/ppo-LunarLander-v11", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
cleanrl/DoubleDunk-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3 | cleanrl | "2023-03-26T00:14:48Z" | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"DoubleDunk-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-26T00:14:46Z" | ---
tags:
- DoubleDunk-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: DoubleDunk-v5
type: DoubleDunk-v5
metrics:
- type: mean_reward
value: -5.80 +/- 5.02
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **DoubleDunk-v5**
This is a trained model of a PPO agent playing DoubleDunk-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --env-id DoubleDunk-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/DoubleDunk-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/DoubleDunk-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/DoubleDunk-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_impala_atari_wrapper.py --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id DoubleDunk-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'DoubleDunk-v5',
'exp_name': 'cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
|
C4SE/distilbert-base-uncased-finetuned-imdb | C4SE | "2024-09-26T08:44:43Z" | 102 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2024-09-26T07:53:09Z" | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4883
- Model Preparation Time: 0.0017
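A minimal fill-mask sketch with the `transformers` pipeline (the example sentence is illustrative only):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="C4SE/distilbert-base-uncased-finetuned-imdb")
for pred in fill("This movie was absolutely [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```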
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|
| No log | 1.0 | 157 | 2.5036 | 0.0017 |
| No log | 2.0 | 314 | 2.4486 | 0.0017 |
| No log | 3.0 | 471 | 2.4807 | 0.0017 |
### Framework versions
- Transformers 4.45.0.dev0
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
ElderlyDed/whisper-small-ru | ElderlyDed | "2024-11-12T19:13:06Z" | 79 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-11-12T10:01:03Z" | ---
library_name: transformers
language:
- hi
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Ru - Agas
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: ru
split: None
args: 'config: ru, split: test'
metrics:
- name: Wer
type: wer
value: 14.951890202207679
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Ru - Agas
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1877
- Wer: 14.9519
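A minimal transcription sketch; forcing the language/task via `generate_kwargs` and the `"sample.wav"` path are assumptions:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="ElderlyDed/whisper-small-ru")
# Forcing Russian transcription is optional; "sample.wav" is a placeholder path.
print(asr("sample.wav", generate_kwargs={"language": "russian", "task": "transcribe"})["text"])
```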
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.1712 | 0.4924 | 1000 | 0.2239 | 18.2829 |
| 0.1636 | 0.9847 | 2000 | 0.1986 | 16.0948 |
| 0.0698 | 1.4771 | 3000 | 0.1922 | 15.4403 |
| 0.0715 | 1.9695 | 4000 | 0.1877 | 14.9519 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
Sorawiz/MistralCreative-24B-Instruct | Sorawiz | "2025-04-08T15:06:20Z" | 13 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:Gryphe/Pantheon-RP-1.8-24b-Small-3.1",
"base_model:merge:Gryphe/Pantheon-RP-1.8-24b-Small-3.1",
"base_model:ReadyArt/Forgotten-Abomination-24B-v4.0",
"base_model:merge:ReadyArt/Forgotten-Abomination-24B-v4.0",
"base_model:ReadyArt/Forgotten-Transgression-24B-v4.1",
"base_model:merge:ReadyArt/Forgotten-Transgression-24B-v4.1",
"base_model:ReadyArt/Gaslight-24B-v1.0",
"base_model:merge:ReadyArt/Gaslight-24B-v1.0",
"base_model:ReadyArt/The-Omega-Directive-M-24B-v1.0",
"base_model:merge:ReadyArt/The-Omega-Directive-M-24B-v1.0",
"base_model:Sorawiz/MistralCreative-24B-Chat",
"base_model:merge:Sorawiz/MistralCreative-24B-Chat",
"base_model:anthracite-core/Mistral-Small-3.1-24B-Instruct-2503-HF",
"base_model:merge:anthracite-core/Mistral-Small-3.1-24B-Instruct-2503-HF",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-07T16:56:05Z" | ---
base_model:
- Sorawiz/MistralCreative-24B-Chat
- Gryphe/Pantheon-RP-1.8-24b-Small-3.1
- ReadyArt/Forgotten-Abomination-24B-v4.0
- ReadyArt/Forgotten-Transgression-24B-v4.1
- ReadyArt/Gaslight-24B-v1.0
- ReadyArt/The-Omega-Directive-M-24B-v1.0
- anthracite-core/Mistral-Small-3.1-24B-Instruct-2503-HF
library_name: transformers
tags:
- mergekit
- merge
---
# Chat Template
Mistral Instruct
```
{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
{{ .Response }}<|im_end|>
```
ChatML
```
{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
{{ .Response }}{{ if .Response }}<|im_end|>{{ end }}
```
# GGUF
Thank you [mradermacher](https://huggingface.co/mradermacher) for creating the GGUF versions of this model.
* Static quants - [mradermacher/MistralCreative-24B-Instruct-GGUF](https://huggingface.co/mradermacher/MistralCreative-24B-Instruct-GGUF)
* Imatrix quants - [mradermacher/MistralCreative-24B-Instruct-i1-GGUF](https://huggingface.co/mradermacher/MistralCreative-24B-Instruct-i1-GGUF)
# Merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [anthracite-core/Mistral-Small-3.1-24B-Instruct-2503-HF](https://huggingface.co/anthracite-core/Mistral-Small-3.1-24B-Instruct-2503-HF) as a base.
### Models Merged
The following models were included in the merge:
* [ReadyArt/The-Omega-Directive-M-24B-v1.0](https://huggingface.co/ReadyArt/The-Omega-Directive-M-24B-v1.0)
* [Sorawiz/MistralCreative-24B-Test-U](https://huggingface.co/Sorawiz/MistralCreative-24B-Test-U)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
name: Sorawiz/MistralCreative-24B-Test-E
merge_method: dare_ties
base_model: Sorawiz/MistralCreative-24B-Chat
models:
- model: Sorawiz/MistralCreative-24B-Chat
parameters:
weight: 0.20
- model: Gryphe/Pantheon-RP-1.8-24b-Small-3.1
parameters:
weight: 0.20
- model: ReadyArt/Forgotten-Transgression-24B-v4.1
parameters:
weight: 0.30
- model: ReadyArt/Forgotten-Abomination-24B-v4.0
parameters:
weight: 0.30
parameters:
density: 1
tokenizer:
source: union
chat_template: auto
---
name: Sorawiz/MistralCreative-24B-Test-U
merge_method: dare_ties
base_model: Sorawiz/MistralCreative-24B-Test-E
models:
- model: Sorawiz/MistralCreative-24B-Test-E
parameters:
weight: 0.3
- model: ReadyArt/Gaslight-24B-v1.0
parameters:
weight: 0.5
- model: Gryphe/Pantheon-RP-1.8-24b-Small-3.1
parameters:
weight: 0.2
parameters:
density: 0.70
tokenizer:
source: union
chat_template: auto
---
models:
- model: anthracite-core/Mistral-Small-3.1-24B-Instruct-2503-HF
- model: Sorawiz/MistralCreative-24B-Test-U
parameters:
density: 1.00
weight: 1.00
- model: ReadyArt/The-Omega-Directive-M-24B-v1.0
parameters:
density: 1.00
weight: 1.00
merge_method: ties
base_model: anthracite-core/Mistral-Small-3.1-24B-Instruct-2503-HF
parameters:
normalize: true
dtype: float32
```
|
Mundra/sports-car | Mundra | "2024-04-20T15:11:11Z" | 5 | 1 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"gen-ai",
"ai-image generator",
"rajasthan",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-02-28T14:45:15Z" | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
- gen-ai
- ai-image generator
- rajasthan
---
### Sports-Car Dreambooth model trained by Mundra following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 20CS07
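A minimal generation sketch with `diffusers`; the concept's prompt token is an assumption:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Mundra/sports-car", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of sports-car on a mountain road").images[0]  # prompt token is an assumption
image.save("sports-car.png")
```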
Sample pictures of this concept:
 |
xezpeleta/Mistral-7b-eu | xezpeleta | "2024-01-18T21:48:26Z" | 8 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"eu",
"dataset:xezpeleta/oasst2_eu_threads",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-01-18T21:23:53Z" | ---
license: apache-2.0
datasets:
- xezpeleta/oasst2_eu_threads
language:
- eu
library_name: transformers
pipeline_tag: text-generation
--- |
MayBashendy/ArabicNewSplits7_OSS_usingWellWrittenEssays_FineTuningAraBERT_run2_AugV5_k14_task1_organization | MayBashendy | "2025-01-15T20:27:25Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-15T20:17:33Z" | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_OSS_usingWellWrittenEssays_FineTuningAraBERT_run2_AugV5_k14_task1_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_OSS_usingWellWrittenEssays_FineTuningAraBERT_run2_AugV5_k14_task1_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9415
- Qwk: 0.6
- Mse: 0.9415
- Rmse: 0.9703
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 0.0190 | 2 | 6.9276 | 0.0116 | 6.9276 | 2.6320 |
| No log | 0.0381 | 4 | 4.7656 | 0.0545 | 4.7656 | 2.1830 |
| No log | 0.0571 | 6 | 3.5346 | 0.0212 | 3.5346 | 1.8800 |
| No log | 0.0762 | 8 | 2.4274 | 0.1333 | 2.4274 | 1.5580 |
| No log | 0.0952 | 10 | 1.9106 | 0.2645 | 1.9106 | 1.3822 |
| No log | 0.1143 | 12 | 1.5750 | 0.1905 | 1.5750 | 1.2550 |
| No log | 0.1333 | 14 | 1.4359 | 0.1495 | 1.4359 | 1.1983 |
| No log | 0.1524 | 16 | 1.4312 | 0.2783 | 1.4312 | 1.1963 |
| No log | 0.1714 | 18 | 1.3533 | 0.3009 | 1.3533 | 1.1633 |
| No log | 0.1905 | 20 | 1.5519 | 0.1905 | 1.5519 | 1.2458 |
| No log | 0.2095 | 22 | 1.6654 | 0.1622 | 1.6654 | 1.2905 |
| No log | 0.2286 | 24 | 1.7327 | 0.25 | 1.7327 | 1.3163 |
| No log | 0.2476 | 26 | 1.5649 | 0.2951 | 1.5649 | 1.2510 |
| No log | 0.2667 | 28 | 1.2763 | 0.3717 | 1.2763 | 1.1297 |
| No log | 0.2857 | 30 | 1.7902 | 0.25 | 1.7902 | 1.3380 |
| No log | 0.3048 | 32 | 1.9905 | 0.1875 | 1.9905 | 1.4108 |
| No log | 0.3238 | 34 | 1.3900 | 0.2385 | 1.3900 | 1.1790 |
| No log | 0.3429 | 36 | 1.1253 | 0.4071 | 1.1253 | 1.0608 |
| No log | 0.3619 | 38 | 1.0888 | 0.4870 | 1.0888 | 1.0434 |
| No log | 0.3810 | 40 | 1.5378 | 0.3881 | 1.5378 | 1.2401 |
| No log | 0.4 | 42 | 1.5740 | 0.3768 | 1.5740 | 1.2546 |
| No log | 0.4190 | 44 | 1.2285 | 0.4783 | 1.2285 | 1.1084 |
| No log | 0.4381 | 46 | 0.9748 | 0.6471 | 0.9748 | 0.9873 |
| No log | 0.4571 | 48 | 0.9813 | 0.6212 | 0.9813 | 0.9906 |
| No log | 0.4762 | 50 | 1.0622 | 0.5802 | 1.0622 | 1.0306 |
| No log | 0.4952 | 52 | 1.1423 | 0.5857 | 1.1423 | 1.0688 |
| No log | 0.5143 | 54 | 1.1446 | 0.5373 | 1.1446 | 1.0698 |
| No log | 0.5333 | 56 | 1.1881 | 0.5507 | 1.1881 | 1.0900 |
| No log | 0.5524 | 58 | 1.2013 | 0.5255 | 1.2013 | 1.0960 |
| No log | 0.5714 | 60 | 1.1758 | 0.5507 | 1.1758 | 1.0843 |
| No log | 0.5905 | 62 | 1.1995 | 0.5070 | 1.1995 | 1.0952 |
| No log | 0.6095 | 64 | 1.2154 | 0.4930 | 1.2154 | 1.1025 |
| No log | 0.6286 | 66 | 1.2228 | 0.5143 | 1.2228 | 1.1058 |
| No log | 0.6476 | 68 | 1.2648 | 0.5655 | 1.2648 | 1.1246 |
| No log | 0.6667 | 70 | 1.3367 | 0.5290 | 1.3367 | 1.1562 |
| No log | 0.6857 | 72 | 1.3831 | 0.5125 | 1.3831 | 1.1760 |
| No log | 0.7048 | 74 | 1.3098 | 0.5663 | 1.3098 | 1.1445 |
| No log | 0.7238 | 76 | 1.0619 | 0.6056 | 1.0619 | 1.0305 |
| No log | 0.7429 | 78 | 0.8583 | 0.6269 | 0.8583 | 0.9264 |
| No log | 0.7619 | 80 | 0.8460 | 0.6475 | 0.8460 | 0.9198 |
| No log | 0.7810 | 82 | 0.9401 | 0.5915 | 0.9401 | 0.9696 |
| No log | 0.8 | 84 | 1.0403 | 0.6259 | 1.0403 | 1.0199 |
| No log | 0.8190 | 86 | 1.0513 | 0.64 | 1.0513 | 1.0253 |
| No log | 0.8381 | 88 | 0.9576 | 0.5793 | 0.9576 | 0.9786 |
| No log | 0.8571 | 90 | 0.8899 | 0.6800 | 0.8899 | 0.9433 |
| No log | 0.8762 | 92 | 0.8002 | 0.6383 | 0.8002 | 0.8945 |
| No log | 0.8952 | 94 | 0.7336 | 0.6963 | 0.7336 | 0.8565 |
| No log | 0.9143 | 96 | 0.7889 | 0.6901 | 0.7889 | 0.8882 |
| No log | 0.9333 | 98 | 0.8813 | 0.6887 | 0.8813 | 0.9388 |
| No log | 0.9524 | 100 | 1.0068 | 0.6486 | 1.0068 | 1.0034 |
| No log | 0.9714 | 102 | 1.0193 | 0.6438 | 1.0193 | 1.0096 |
| No log | 0.9905 | 104 | 1.0252 | 0.6259 | 1.0252 | 1.0125 |
| No log | 1.0095 | 106 | 1.0172 | 0.6338 | 1.0172 | 1.0085 |
| No log | 1.0286 | 108 | 0.9630 | 0.6575 | 0.9630 | 0.9813 |
| No log | 1.0476 | 110 | 0.7965 | 0.6806 | 0.7965 | 0.8925 |
| No log | 1.0667 | 112 | 0.7461 | 0.7153 | 0.7461 | 0.8638 |
| No log | 1.0857 | 114 | 0.7243 | 0.7050 | 0.7243 | 0.8511 |
| No log | 1.1048 | 116 | 0.9890 | 0.7 | 0.9890 | 0.9945 |
| No log | 1.1238 | 118 | 1.4175 | 0.5789 | 1.4175 | 1.1906 |
| No log | 1.1429 | 120 | 1.4610 | 0.5612 | 1.4610 | 1.2087 |
| No log | 1.1619 | 122 | 1.1572 | 0.6857 | 1.1572 | 1.0757 |
| No log | 1.1810 | 124 | 0.8256 | 0.7170 | 0.8256 | 0.9086 |
| No log | 1.2 | 126 | 0.7839 | 0.6846 | 0.7839 | 0.8854 |
| No log | 1.2190 | 128 | 0.8247 | 0.6897 | 0.8247 | 0.9081 |
| No log | 1.2381 | 130 | 0.9266 | 0.6483 | 0.9266 | 0.9626 |
| No log | 1.2571 | 132 | 0.9848 | 0.5926 | 0.9848 | 0.9923 |
| No log | 1.2762 | 134 | 1.0753 | 0.5821 | 1.0753 | 1.0370 |
| No log | 1.2952 | 136 | 1.2665 | 0.5655 | 1.2665 | 1.1254 |
| No log | 1.3143 | 138 | 1.1714 | 0.6 | 1.1714 | 1.0823 |
| No log | 1.3333 | 140 | 1.1058 | 0.6531 | 1.1058 | 1.0516 |
| No log | 1.3524 | 142 | 1.0866 | 0.6667 | 1.0866 | 1.0424 |
| No log | 1.3714 | 144 | 1.1161 | 0.6705 | 1.1161 | 1.0565 |
| No log | 1.3905 | 146 | 1.2254 | 0.6448 | 1.2254 | 1.1070 |
| No log | 1.4095 | 148 | 1.1206 | 0.6742 | 1.1206 | 1.0586 |
| No log | 1.4286 | 150 | 0.8646 | 0.7294 | 0.8646 | 0.9298 |
| No log | 1.4476 | 152 | 0.7859 | 0.7619 | 0.7859 | 0.8865 |
| No log | 1.4667 | 154 | 0.8066 | 0.7349 | 0.8066 | 0.8981 |
| No log | 1.4857 | 156 | 0.8386 | 0.7381 | 0.8386 | 0.9157 |
| No log | 1.5048 | 158 | 0.9444 | 0.7143 | 0.9444 | 0.9718 |
| No log | 1.5238 | 160 | 0.9465 | 0.7135 | 0.9465 | 0.9729 |
| No log | 1.5429 | 162 | 1.1787 | 0.7021 | 1.1787 | 1.0857 |
| No log | 1.5619 | 164 | 1.3246 | 0.6492 | 1.3246 | 1.1509 |
| No log | 1.5810 | 166 | 1.0349 | 0.7066 | 1.0349 | 1.0173 |
| No log | 1.6 | 168 | 0.7550 | 0.6933 | 0.7550 | 0.8689 |
| No log | 1.6190 | 170 | 0.6570 | 0.6763 | 0.6570 | 0.8105 |
| No log | 1.6381 | 172 | 0.7324 | 0.7226 | 0.7324 | 0.8558 |
| No log | 1.6571 | 174 | 1.0027 | 0.7143 | 1.0027 | 1.0014 |
| No log | 1.6762 | 176 | 0.9650 | 0.7473 | 0.9650 | 0.9823 |
| No log | 1.6952 | 178 | 0.9670 | 0.7333 | 0.9670 | 0.9834 |
| No log | 1.7143 | 180 | 0.9839 | 0.7086 | 0.9839 | 0.9919 |
| No log | 1.7333 | 182 | 0.8550 | 0.7273 | 0.8550 | 0.9247 |
| No log | 1.7524 | 184 | 0.7549 | 0.7362 | 0.7549 | 0.8689 |
| No log | 1.7714 | 186 | 0.5831 | 0.7815 | 0.5831 | 0.7636 |
| No log | 1.7905 | 188 | 0.5562 | 0.7792 | 0.5562 | 0.7458 |
| No log | 1.8095 | 190 | 0.5702 | 0.7974 | 0.5702 | 0.7551 |
| No log | 1.8286 | 192 | 0.5744 | 0.8176 | 0.5744 | 0.7579 |
| No log | 1.8476 | 194 | 0.7114 | 0.75 | 0.7114 | 0.8435 |
| No log | 1.8667 | 196 | 1.0511 | 0.6509 | 1.0511 | 1.0252 |
| No log | 1.8857 | 198 | 1.3085 | 0.6522 | 1.3085 | 1.1439 |
| No log | 1.9048 | 200 | 1.2143 | 0.6347 | 1.2143 | 1.1020 |
| No log | 1.9238 | 202 | 0.9178 | 0.6575 | 0.9178 | 0.9580 |
| No log | 1.9429 | 204 | 0.7264 | 0.7059 | 0.7264 | 0.8523 |
| No log | 1.9619 | 206 | 0.6319 | 0.7101 | 0.6319 | 0.7949 |
| No log | 1.9810 | 208 | 0.5785 | 0.7552 | 0.5785 | 0.7606 |
| No log | 2.0 | 210 | 0.6666 | 0.7692 | 0.6666 | 0.8165 |
| No log | 2.0190 | 212 | 0.7822 | 0.7590 | 0.7822 | 0.8844 |
| No log | 2.0381 | 214 | 0.9594 | 0.6927 | 0.9594 | 0.9795 |
| No log | 2.0571 | 216 | 1.0295 | 0.6927 | 1.0295 | 1.0146 |
| No log | 2.0762 | 218 | 1.0234 | 0.6936 | 1.0234 | 1.0117 |
| No log | 2.0952 | 220 | 0.8325 | 0.6800 | 0.8325 | 0.9124 |
| No log | 2.1143 | 222 | 0.6633 | 0.7376 | 0.6633 | 0.8144 |
| No log | 2.1333 | 224 | 0.6144 | 0.7518 | 0.6144 | 0.7838 |
| No log | 2.1524 | 226 | 0.5781 | 0.7808 | 0.5781 | 0.7603 |
| No log | 2.1714 | 228 | 0.5720 | 0.75 | 0.5720 | 0.7563 |
| No log | 2.1905 | 230 | 0.7841 | 0.7006 | 0.7841 | 0.8855 |
| No log | 2.2095 | 232 | 1.1698 | 0.6522 | 1.1698 | 1.0816 |
| No log | 2.2286 | 234 | 1.2711 | 0.6528 | 1.2711 | 1.1274 |
| No log | 2.2476 | 236 | 1.1425 | 0.6630 | 1.1425 | 1.0689 |
| No log | 2.2667 | 238 | 0.9720 | 0.6338 | 0.9720 | 0.9859 |
| No log | 2.2857 | 240 | 0.9178 | 0.6383 | 0.9178 | 0.9580 |
| No log | 2.3048 | 242 | 0.8224 | 0.6667 | 0.8224 | 0.9069 |
| No log | 2.3238 | 244 | 0.6724 | 0.7483 | 0.6724 | 0.8200 |
| No log | 2.3429 | 246 | 0.6042 | 0.7815 | 0.6042 | 0.7773 |
| No log | 2.3619 | 248 | 0.5984 | 0.7821 | 0.5984 | 0.7736 |
| No log | 2.3810 | 250 | 0.6357 | 0.7811 | 0.6357 | 0.7973 |
| No log | 2.4 | 252 | 0.8069 | 0.7079 | 0.8069 | 0.8983 |
| No log | 2.4190 | 254 | 1.0167 | 0.6630 | 1.0167 | 1.0083 |
| No log | 2.4381 | 256 | 1.1817 | 0.6067 | 1.1817 | 1.0871 |
| No log | 2.4571 | 258 | 1.1942 | 0.6071 | 1.1942 | 1.0928 |
| No log | 2.4762 | 260 | 1.0612 | 0.6667 | 1.0612 | 1.0302 |
| No log | 2.4952 | 262 | 1.0103 | 0.6418 | 1.0103 | 1.0051 |
| No log | 2.5143 | 264 | 0.9821 | 0.6897 | 0.9821 | 0.9910 |
| No log | 2.5333 | 266 | 1.0528 | 0.625 | 1.0528 | 1.0261 |
| No log | 2.5524 | 268 | 1.1404 | 0.6222 | 1.1404 | 1.0679 |
| No log | 2.5714 | 270 | 1.1478 | 0.6047 | 1.1478 | 1.0713 |
| No log | 2.5905 | 272 | 1.0288 | 0.6503 | 1.0288 | 1.0143 |
| No log | 2.6095 | 274 | 1.0580 | 0.6289 | 1.0580 | 1.0286 |
| No log | 2.6286 | 276 | 0.9598 | 0.7018 | 0.9598 | 0.9797 |
| No log | 2.6476 | 278 | 0.7757 | 0.7561 | 0.7757 | 0.8807 |
| No log | 2.6667 | 280 | 0.6897 | 0.7436 | 0.6897 | 0.8305 |
| No log | 2.6857 | 282 | 0.7379 | 0.7368 | 0.7379 | 0.8590 |
| No log | 2.7048 | 284 | 0.8943 | 0.6755 | 0.8943 | 0.9457 |
| No log | 2.7238 | 286 | 1.0691 | 0.6380 | 1.0691 | 1.0340 |
| No log | 2.7429 | 288 | 1.1026 | 0.6380 | 1.1026 | 1.0500 |
| No log | 2.7619 | 290 | 1.0762 | 0.6380 | 1.0762 | 1.0374 |
| No log | 2.7810 | 292 | 0.9005 | 0.6939 | 0.9005 | 0.9490 |
| No log | 2.8 | 294 | 0.8889 | 0.6883 | 0.8889 | 0.9428 |
| No log | 2.8190 | 296 | 0.8913 | 0.6795 | 0.8913 | 0.9441 |
| No log | 2.8381 | 298 | 0.7952 | 0.7162 | 0.7952 | 0.8917 |
| No log | 2.8571 | 300 | 0.7681 | 0.7333 | 0.7681 | 0.8764 |
| No log | 2.8762 | 302 | 0.8102 | 0.7237 | 0.8102 | 0.9001 |
| No log | 2.8952 | 304 | 0.9584 | 0.6832 | 0.9584 | 0.9790 |
| No log | 2.9143 | 306 | 1.2489 | 0.5989 | 1.2489 | 1.1176 |
| No log | 2.9333 | 308 | 1.4654 | 0.5683 | 1.4654 | 1.2106 |
| No log | 2.9524 | 310 | 1.3460 | 0.5629 | 1.3460 | 1.1602 |
| No log | 2.9714 | 312 | 1.1198 | 0.5946 | 1.1198 | 1.0582 |
| No log | 2.9905 | 314 | 0.9346 | 0.6483 | 0.9346 | 0.9668 |
| No log | 3.0095 | 316 | 0.8010 | 0.6986 | 0.8010 | 0.8950 |
| No log | 3.0286 | 318 | 0.7957 | 0.7075 | 0.7957 | 0.8920 |
| No log | 3.0476 | 320 | 0.8323 | 0.6797 | 0.8323 | 0.9123 |
| No log | 3.0667 | 322 | 0.7511 | 0.7273 | 0.7511 | 0.8667 |
| No log | 3.0857 | 324 | 0.7228 | 0.7532 | 0.7228 | 0.8502 |
| No log | 3.1048 | 326 | 0.6662 | 0.7712 | 0.6662 | 0.8162 |
| No log | 3.1238 | 328 | 0.6598 | 0.7682 | 0.6598 | 0.8123 |
| No log | 3.1429 | 330 | 0.7309 | 0.7273 | 0.7309 | 0.8549 |
| No log | 3.1619 | 332 | 1.0310 | 0.7128 | 1.0310 | 1.0154 |
| No log | 3.1810 | 334 | 1.3323 | 0.6502 | 1.3323 | 1.1542 |
| No log | 3.2 | 336 | 1.3242 | 0.6473 | 1.3242 | 1.1508 |
| No log | 3.2190 | 338 | 1.2287 | 0.6601 | 1.2287 | 1.1085 |
| No log | 3.2381 | 340 | 0.9364 | 0.7429 | 0.9364 | 0.9677 |
| No log | 3.2571 | 342 | 0.7346 | 0.7417 | 0.7346 | 0.8571 |
| No log | 3.2762 | 344 | 0.7309 | 0.7361 | 0.7309 | 0.8549 |
| No log | 3.2952 | 346 | 0.7612 | 0.7413 | 0.7612 | 0.8725 |
| No log | 3.3143 | 348 | 0.8334 | 0.6618 | 0.8334 | 0.9129 |
| No log | 3.3333 | 350 | 0.9475 | 0.625 | 0.9475 | 0.9734 |
| No log | 3.3524 | 352 | 1.1033 | 0.5987 | 1.1033 | 1.0504 |
| No log | 3.3714 | 354 | 1.0992 | 0.6441 | 1.0992 | 1.0484 |
| No log | 3.3905 | 356 | 0.8923 | 0.7195 | 0.8923 | 0.9446 |
| No log | 3.4095 | 358 | 0.7275 | 0.7484 | 0.7275 | 0.8529 |
| No log | 3.4286 | 360 | 0.7031 | 0.7763 | 0.7031 | 0.8385 |
| No log | 3.4476 | 362 | 0.7423 | 0.7403 | 0.7423 | 0.8616 |
| No log | 3.4667 | 364 | 0.9351 | 0.7273 | 0.9351 | 0.9670 |
| No log | 3.4857 | 366 | 1.2541 | 0.6866 | 1.2541 | 1.1199 |
| No log | 3.5048 | 368 | 1.2725 | 0.6601 | 1.2725 | 1.1280 |
| No log | 3.5238 | 370 | 1.0359 | 0.6984 | 1.0359 | 1.0178 |
| No log | 3.5429 | 372 | 0.6981 | 0.7882 | 0.6981 | 0.8355 |
| No log | 3.5619 | 374 | 0.5405 | 0.7755 | 0.5405 | 0.7352 |
| No log | 3.5810 | 376 | 0.5690 | 0.7671 | 0.5690 | 0.7543 |
| No log | 3.6 | 378 | 0.6651 | 0.7568 | 0.6651 | 0.8155 |
| No log | 3.6190 | 380 | 0.8859 | 0.6846 | 0.8859 | 0.9412 |
| No log | 3.6381 | 382 | 1.0890 | 0.6061 | 1.0890 | 1.0435 |
| No log | 3.6571 | 384 | 1.1592 | 0.5904 | 1.1592 | 1.0766 |
| No log | 3.6762 | 386 | 1.0659 | 0.5974 | 1.0659 | 1.0324 |
| No log | 3.6952 | 388 | 0.9105 | 0.6573 | 0.9105 | 0.9542 |
| No log | 3.7143 | 390 | 0.8507 | 0.6857 | 0.8507 | 0.9223 |
| No log | 3.7333 | 392 | 0.8678 | 0.6759 | 0.8678 | 0.9316 |
| No log | 3.7524 | 394 | 0.7877 | 0.7211 | 0.7877 | 0.8875 |
| No log | 3.7714 | 396 | 0.6832 | 0.7568 | 0.6832 | 0.8266 |
| No log | 3.7905 | 398 | 0.6811 | 0.7517 | 0.6811 | 0.8253 |
| No log | 3.8095 | 400 | 0.8109 | 0.7262 | 0.8109 | 0.9005 |
| No log | 3.8286 | 402 | 0.9535 | 0.6893 | 0.9535 | 0.9765 |
| No log | 3.8476 | 404 | 1.0439 | 0.6629 | 1.0439 | 1.0217 |
| No log | 3.8667 | 406 | 0.9609 | 0.6380 | 0.9609 | 0.9802 |
| No log | 3.8857 | 408 | 0.8449 | 0.6806 | 0.8449 | 0.9192 |
| No log | 3.9048 | 410 | 0.8144 | 0.6853 | 0.8144 | 0.9024 |
| No log | 3.9238 | 412 | 0.8255 | 0.6714 | 0.8255 | 0.9086 |
| No log | 3.9429 | 414 | 0.8990 | 0.6528 | 0.8990 | 0.9482 |
| No log | 3.9619 | 416 | 0.9680 | 0.6447 | 0.9680 | 0.9839 |
| No log | 3.9810 | 418 | 0.9759 | 0.6447 | 0.9759 | 0.9879 |
| No log | 4.0 | 420 | 0.9182 | 0.6622 | 0.9182 | 0.9582 |
| No log | 4.0190 | 422 | 0.8673 | 0.6622 | 0.8673 | 0.9313 |
| No log | 4.0381 | 424 | 0.7924 | 0.6667 | 0.7924 | 0.8902 |
| No log | 4.0571 | 426 | 0.7588 | 0.6950 | 0.7588 | 0.8711 |
| No log | 4.0762 | 428 | 0.7577 | 0.6950 | 0.7577 | 0.8704 |
| No log | 4.0952 | 430 | 0.7311 | 0.7083 | 0.7311 | 0.8550 |
| No log | 4.1143 | 432 | 0.7298 | 0.7211 | 0.7298 | 0.8543 |
| No log | 4.1333 | 434 | 0.7760 | 0.6986 | 0.7760 | 0.8809 |
| No log | 4.1524 | 436 | 0.8683 | 0.6803 | 0.8683 | 0.9318 |
| No log | 4.1714 | 438 | 0.9499 | 0.6531 | 0.9499 | 0.9746 |
| No log | 4.1905 | 440 | 1.0331 | 0.6410 | 1.0331 | 1.0164 |
| No log | 4.2095 | 442 | 0.9723 | 0.6667 | 0.9723 | 0.9860 |
| No log | 4.2286 | 444 | 0.9087 | 0.6892 | 0.9087 | 0.9533 |
| No log | 4.2476 | 446 | 0.7878 | 0.7 | 0.7878 | 0.8876 |
| No log | 4.2667 | 448 | 0.7416 | 0.7429 | 0.7416 | 0.8612 |
| No log | 4.2857 | 450 | 0.7007 | 0.7429 | 0.7007 | 0.8371 |
| No log | 4.3048 | 452 | 0.7716 | 0.7297 | 0.7716 | 0.8784 |
| No log | 4.3238 | 454 | 0.8073 | 0.7226 | 0.8073 | 0.8985 |
| No log | 4.3429 | 456 | 0.8476 | 0.7362 | 0.8476 | 0.9207 |
| No log | 4.3619 | 458 | 0.8605 | 0.7337 | 0.8605 | 0.9276 |
| No log | 4.3810 | 460 | 0.7855 | 0.7394 | 0.7855 | 0.8863 |
| No log | 4.4 | 462 | 0.7125 | 0.7517 | 0.7125 | 0.8441 |
| No log | 4.4190 | 464 | 0.6522 | 0.7550 | 0.6522 | 0.8076 |
| No log | 4.4381 | 466 | 0.6744 | 0.7517 | 0.6744 | 0.8212 |
| No log | 4.4571 | 468 | 0.7751 | 0.7296 | 0.7751 | 0.8804 |
| No log | 4.4762 | 470 | 0.7674 | 0.7407 | 0.7674 | 0.8760 |
| No log | 4.4952 | 472 | 0.7191 | 0.7673 | 0.7191 | 0.8480 |
| No log | 4.5143 | 474 | 0.6187 | 0.7755 | 0.6187 | 0.7866 |
| No log | 4.5333 | 476 | 0.5954 | 0.7703 | 0.5954 | 0.7716 |
| No log | 4.5524 | 478 | 0.6129 | 0.7843 | 0.6129 | 0.7829 |
| No log | 4.5714 | 480 | 0.7035 | 0.775 | 0.7035 | 0.8387 |
| No log | 4.5905 | 482 | 0.7294 | 0.7547 | 0.7294 | 0.8540 |
| No log | 4.6095 | 484 | 0.7819 | 0.7375 | 0.7819 | 0.8842 |
| No log | 4.6286 | 486 | 0.8245 | 0.7134 | 0.8245 | 0.9080 |
| No log | 4.6476 | 488 | 0.8018 | 0.7179 | 0.8018 | 0.8954 |
| No log | 4.6667 | 490 | 0.7614 | 0.7261 | 0.7614 | 0.8726 |
| No log | 4.6857 | 492 | 0.7688 | 0.7134 | 0.7688 | 0.8768 |
| No log | 4.7048 | 494 | 0.7713 | 0.6909 | 0.7713 | 0.8782 |
| No log | 4.7238 | 496 | 0.6533 | 0.7792 | 0.6533 | 0.8082 |
| No log | 4.7429 | 498 | 0.6313 | 0.7792 | 0.6313 | 0.7945 |
| 0.4155 | 4.7619 | 500 | 0.7728 | 0.6623 | 0.7728 | 0.8791 |
| 0.4155 | 4.7810 | 502 | 0.9621 | 0.6497 | 0.9621 | 0.9809 |
| 0.4155 | 4.8 | 504 | 1.0962 | 0.6026 | 1.0962 | 1.0470 |
| 0.4155 | 4.8190 | 506 | 1.0686 | 0.5833 | 1.0686 | 1.0337 |
| 0.4155 | 4.8381 | 508 | 1.0001 | 0.6014 | 1.0001 | 1.0001 |
| 0.4155 | 4.8571 | 510 | 0.9415 | 0.6 | 0.9415 | 0.9703 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
magnustragardh/bert-finetuned-ner | magnustragardh | "2023-08-14T18:35:53Z" | 116 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-08-14T17:53:02Z" | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9341604631927213
- name: Recall
type: recall
value: 0.9503534163581285
- name: F1
type: f1
value: 0.9421873696504547
- name: Accuracy
type: accuracy
value: 0.9865485371166186
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0595
- Precision: 0.9342
- Recall: 0.9504
- F1: 0.9422
- Accuracy: 0.9865
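A minimal usage sketch with the `transformers` pipeline (the aggregation setting below is an assumption, not part of the original card):

```python
from transformers import pipeline

# Token-classification pipeline; "simple" aggregation merges sub-word pieces
# back into whole entity spans.
ner = pipeline(
    "token-classification",
    model="magnustragardh/bert-finetuned-ner",
    aggregation_strategy="simple",
)

print(ner("Hugging Face is based in New York City."))
```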
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0793 | 1.0 | 1756 | 0.0771 | 0.9122 | 0.9335 | 0.9227 | 0.9804 |
| 0.0412 | 2.0 | 3512 | 0.0606 | 0.9244 | 0.9448 | 0.9345 | 0.9855 |
| 0.0259 | 3.0 | 5268 | 0.0595 | 0.9342 | 0.9504 | 0.9422 | 0.9865 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
tensorblock/Yi-Ko-6B-Exo-test-v1-GGUF | tensorblock | "2025-01-09T15:06:09Z" | 221 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:oopsung/Yi-Ko-6B-Exo-test-v1",
"base_model:quantized:oopsung/Yi-Ko-6B-Exo-test-v1",
"endpoints_compatible",
"region:us"
] | null | "2025-01-09T14:38:12Z" | ---
base_model: oopsung/Yi-Ko-6B-Exo-test-v1
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## oopsung/Yi-Ko-6B-Exo-test-v1 - GGUF
This repo contains GGUF format model files for [oopsung/Yi-Ko-6B-Exo-test-v1](https://huggingface.co/oopsung/Yi-Ko-6B-Exo-test-v1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Yi-Ko-6B-Exo-test-v1-Q2_K.gguf](https://huggingface.co/tensorblock/Yi-Ko-6B-Exo-test-v1-GGUF/blob/main/Yi-Ko-6B-Exo-test-v1-Q2_K.gguf) | Q2_K | 2.405 GB | smallest, significant quality loss - not recommended for most purposes |
| [Yi-Ko-6B-Exo-test-v1-Q3_K_S.gguf](https://huggingface.co/tensorblock/Yi-Ko-6B-Exo-test-v1-GGUF/blob/main/Yi-Ko-6B-Exo-test-v1-Q3_K_S.gguf) | Q3_K_S | 2.784 GB | very small, high quality loss |
| [Yi-Ko-6B-Exo-test-v1-Q3_K_M.gguf](https://huggingface.co/tensorblock/Yi-Ko-6B-Exo-test-v1-GGUF/blob/main/Yi-Ko-6B-Exo-test-v1-Q3_K_M.gguf) | Q3_K_M | 3.067 GB | very small, high quality loss |
| [Yi-Ko-6B-Exo-test-v1-Q3_K_L.gguf](https://huggingface.co/tensorblock/Yi-Ko-6B-Exo-test-v1-GGUF/blob/main/Yi-Ko-6B-Exo-test-v1-Q3_K_L.gguf) | Q3_K_L | 3.311 GB | small, substantial quality loss |
| [Yi-Ko-6B-Exo-test-v1-Q4_0.gguf](https://huggingface.co/tensorblock/Yi-Ko-6B-Exo-test-v1-GGUF/blob/main/Yi-Ko-6B-Exo-test-v1-Q4_0.gguf) | Q4_0 | 3.562 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Yi-Ko-6B-Exo-test-v1-Q4_K_S.gguf](https://huggingface.co/tensorblock/Yi-Ko-6B-Exo-test-v1-GGUF/blob/main/Yi-Ko-6B-Exo-test-v1-Q4_K_S.gguf) | Q4_K_S | 3.585 GB | small, greater quality loss |
| [Yi-Ko-6B-Exo-test-v1-Q4_K_M.gguf](https://huggingface.co/tensorblock/Yi-Ko-6B-Exo-test-v1-GGUF/blob/main/Yi-Ko-6B-Exo-test-v1-Q4_K_M.gguf) | Q4_K_M | 3.756 GB | medium, balanced quality - recommended |
| [Yi-Ko-6B-Exo-test-v1-Q5_0.gguf](https://huggingface.co/tensorblock/Yi-Ko-6B-Exo-test-v1-GGUF/blob/main/Yi-Ko-6B-Exo-test-v1-Q5_0.gguf) | Q5_0 | 4.294 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Yi-Ko-6B-Exo-test-v1-Q5_K_S.gguf](https://huggingface.co/tensorblock/Yi-Ko-6B-Exo-test-v1-GGUF/blob/main/Yi-Ko-6B-Exo-test-v1-Q5_K_S.gguf) | Q5_K_S | 4.294 GB | large, low quality loss - recommended |
| [Yi-Ko-6B-Exo-test-v1-Q5_K_M.gguf](https://huggingface.co/tensorblock/Yi-Ko-6B-Exo-test-v1-GGUF/blob/main/Yi-Ko-6B-Exo-test-v1-Q5_K_M.gguf) | Q5_K_M | 4.394 GB | large, very low quality loss - recommended |
| [Yi-Ko-6B-Exo-test-v1-Q6_K.gguf](https://huggingface.co/tensorblock/Yi-Ko-6B-Exo-test-v1-GGUF/blob/main/Yi-Ko-6B-Exo-test-v1-Q6_K.gguf) | Q6_K | 5.072 GB | very large, extremely low quality loss |
| [Yi-Ko-6B-Exo-test-v1-Q8_0.gguf](https://huggingface.co/tensorblock/Yi-Ko-6B-Exo-test-v1-GGUF/blob/main/Yi-Ko-6B-Exo-test-v1-Q8_0.gguf) | Q8_0 | 6.568 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
Firstly, install Huggingface Client
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory
```shell
huggingface-cli download tensorblock/Yi-Ko-6B-Exo-test-v1-GGUF --include "Yi-Ko-6B-Exo-test-v1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Yi-Ko-6B-Exo-test-v1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
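Once downloaded, a file can be run locally with llama.cpp; a minimal sketch (the binary name and flags assume a llama.cpp build at least as recent as the commit listed above):
```shell
./llama-cli -m MY_LOCAL_DIR/Yi-Ko-6B-Exo-test-v1-Q4_K_M.gguf -p "Hello" -n 128
```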
|
vipuldeore/youngboy | vipuldeore | "2025-03-18T10:59:43Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | "2025-03-18T10:59:32Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: "UNICODE\0\0A\0 \0c\0i\0n\0e\0m\0a\0t\0i\0c\0 \0p\0o\0r\0t\0r\0a\0i\0t\0 \0o\0f\0 \0a\0 \0y\0o\0u\0n\0g\0 \0m\0a\0n\0 \0w\0i\0t\0h\0 \0s\0h\0o\0r\0t\0,\0 \0s\0t\0r\0a\0i\0g\0h\0t\0,\0 \0n\0e\0a\0t\0l\0y\0 \0c\0o\0m\0b\0e\0d\0 \0,\0s\0l\0i\0c\0k\0 \0b\0a\0c\0k\0b\0r\0o\0w\0n\0 \0h\0a\0i\0r\0 \0A\0n\0 \0e\0l\0e\0g\0a\0n\0t\0 \0i\0n\0d\0i\0v\0i\0d\0u\0a\0l\0 \0s\0e\0a\0t\0e\0d\0 \0a\0t\0 \0a\0 \0s\0l\0e\0e\0k\0 \0c\0o\0n\0f\0e\0r\0e\0n\0c\0e\0 \0t\0a\0b\0l\0e\0,\0 \0w\0e\0a\0r\0i\0n\0g\0 \0a\0 \0f\0i\0t\0t\0e\0d\0 \0b\0l\0a\0z\0e\0r\0,\0 \0w\0i\0t\0h\0 \0a\0 \0n\0o\0t\0e\0p\0a\0d\0 \0a\0n\0d\0 \0l\0a\0p\0t\0o\0p\0 \0i\0n\0 \0f\0r\0o\0n\0t\0,\0 \0e\0n\0g\0a\0g\0e\0d\0 \0i\0n\0 \0a\0 \0d\0i\0s\0c\0u\0s\0s\0i\0o\0n\0 \0w\0i\0t\0h\0 \0c\0o\0l\0l\0e\0a\0g\0u\0e\0s\0,\0 \0s\0h\0o\0w\0c\0a\0s\0i\0n\0g\0 \0l\0e\0a\0d\0e\0r\0s\0h\0i\0p\0 \0a\0n\0d\0 \0p\0r\0o\0f\0e\0s\0s\0i\0o\0n\0a\0l\0i\0s\0m\0.\0"
output:
url: images/TestV2_e000004_00_20241001193122.jpeg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: young man
---
# young boy
<Gallery />
## Trigger words
You should use `young man` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/vipuldeore/youngboy/tree/main) them in the Files & versions tab.
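A minimal loading sketch with diffusers (this assumes the standard FLUX LoRA workflow and automatic weight-file resolution; it is not part of the original card):

```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the FLUX.1-dev base model listed in this card's metadata, then attach the LoRA.
pipe = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("vipuldeore/youngboy")

# The trigger words from this card must appear in the prompt.
image = pipe(prompt="young man in a fitted blazer, cinematic portrait").images[0]
image.save("young_man.png")
```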
|
lesso09/c29cf3b2-08ca-4aff-9098-d24e889800f0 | lesso09 | "2025-02-23T09:23:39Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:berkeley-nest/Starling-LM-7B-alpha",
"base_model:adapter:berkeley-nest/Starling-LM-7B-alpha",
"license:apache-2.0",
"region:us"
] | null | "2025-02-23T08:58:51Z" | ---
library_name: peft
license: apache-2.0
base_model: berkeley-nest/Starling-LM-7B-alpha
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c29cf3b2-08ca-4aff-9098-d24e889800f0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
auto_find_batch_size: true
base_model: berkeley-nest/Starling-LM-7B-alpha
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3061c3759eac5b59_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3061c3759eac5b59_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 50
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: true
hub_model_id: lesso09/c29cf3b2-08ca-4aff-9098-d24e889800f0
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000209
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/3061c3759eac5b59_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
seed: 90
sequence_len: 512
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4de58495-d820-4b30-a531-4ff9446313dc
wandb_project: 09a
wandb_run: your_name
wandb_runid: 4de58495-d820-4b30-a531-4ff9446313dc
warmup_steps: 50
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# c29cf3b2-08ca-4aff-9098-d24e889800f0
This model is a fine-tuned version of [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1918
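Since this repository holds a LoRA adapter, inference requires loading it on top of the base model; a minimal sketch (the prompt and generation settings are assumptions):

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the base model (berkeley-nest/Starling-LM-7B-alpha) and applies this adapter.
model = AutoPeftModelForCausalLM.from_pretrained(
    "lesso09/c29cf3b2-08ca-4aff-9098-d24e889800f0", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("berkeley-nest/Starling-LM-7B-alpha")

inputs = tokenizer("What is low-rank adaptation?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```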
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000209
- train_batch_size: 4
- eval_batch_size: 4
- seed: 90
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 0.2637 |
| 0.3057 | 0.0085 | 50 | 0.1829 |
| 0.3836 | 0.0169 | 100 | 0.1999 |
| 0.3846 | 0.0254 | 150 | 0.1997 |
| 0.3616 | 0.0339 | 200 | 0.1918 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nikitastheo/BERTtime-Stories-10m-nucleus-1-balanced | nikitastheo | "2024-09-17T07:58:40Z" | 4,254 | 0 | transformers | [
"transformers",
"pytorch",
"ltgbert",
"fill-mask",
"custom_code",
"en",
"license:gpl-3.0",
"autotrain_compatible",
"region:us"
] | fill-mask | "2024-09-17T05:52:51Z" | ---
license: gpl-3.0
language:
- en
library_name: transformers
---
This model uses the LTG-BERT architecture.
The model was trained on a combination of the BabyLM Dataset, the TinyStories Dataset, and generated data,
in accordance with the rules of the Strict-Small track and its 10M-word budget.
The model was trained with a 128-token sequence length.
Hyperparameters used and evaluation scores will follow in a subsequent update.
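Since the LTG-BERT architecture ships as custom code, loading requires `trust_remote_code=True`; a minimal sketch (the fill-mask head is inferred from the model's pipeline tag):

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "nikitastheo/BERTtime-Stories-10m-nucleus-1-balanced"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForMaskedLM.from_pretrained(model_id, trust_remote_code=True)
```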
|
wisenut-nlp-team/t5-fid-new | wisenut-nlp-team | "2024-05-10T05:17:10Z" | 27 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | "2023-12-22T05:16:53Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
liaad/srl-en_xlmr-base | liaad | "2021-09-22T08:56:11Z" | 9 | 1 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"xlm-roberta-base",
"semantic role labeling",
"finetuned",
"multilingual",
"pt",
"en",
"dataset:PropBank.Br",
"dataset:CoNLL-2012",
"arxiv:2101.01213",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2022-03-02T23:29:05Z" | ---
language:
- multilingual
- pt
- en
tags:
- xlm-roberta-base
- semantic role labeling
- finetuned
license: apache-2.0
datasets:
- PropBank.Br
- CoNLL-2012
metrics:
- F1 Measure
---
# XLM-R base fine-tuned on English semantic role labeling
## Model description
This model is [`xlm-roberta-base`](https://huggingface.co/xlm-roberta-base) fine-tuned on the English CoNLL-formatted OntoNotes v5.0 semantic role labeling data. It is part of a project that produced the following models:
* [liaad/srl-pt_bertimbau-base](https://huggingface.co/liaad/srl-pt_bertimbau-base)
* [liaad/srl-pt_bertimbau-large](https://huggingface.co/liaad/srl-pt_bertimbau-large)
* [liaad/srl-pt_xlmr-base](https://huggingface.co/liaad/srl-pt_xlmr-base)
* [liaad/srl-pt_xlmr-large](https://huggingface.co/liaad/srl-pt_xlmr-large)
* [liaad/srl-pt_mbert-base](https://huggingface.co/liaad/srl-pt_mbert-base)
* [liaad/srl-en_xlmr-base](https://huggingface.co/liaad/srl-en_xlmr-base)
* [liaad/srl-en_xlmr-large](https://huggingface.co/liaad/srl-en_xlmr-large)
* [liaad/srl-en_mbert-base](https://huggingface.co/liaad/srl-en_mbert-base)
* [liaad/srl-enpt_xlmr-base](https://huggingface.co/liaad/srl-enpt_xlmr-base)
* [liaad/srl-enpt_xlmr-large](https://huggingface.co/liaad/srl-enpt_xlmr-large)
* [liaad/srl-enpt_mbert-base](https://huggingface.co/liaad/srl-enpt_mbert-base)
* [liaad/ud_srl-pt_bertimbau-large](https://huggingface.co/liaad/ud_srl-pt_bertimbau-large)
* [liaad/ud_srl-pt_xlmr-large](https://huggingface.co/liaad/ud_srl-pt_xlmr-large)
* [liaad/ud_srl-enpt_xlmr-large](https://huggingface.co/liaad/ud_srl-enpt_xlmr-large)
For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Intended uses & limitations
#### How to use
To use the transformers portion of this model:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("liaad/srl-en_xlmr-base")
model = AutoModel.from_pretrained("liaad/srl-en_xlmr-base")
```
To use the full SRL model (transformers portion + a decoding layer), refer to the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
#### Limitations and bias
- This model does not include a Tensorflow version. This is because the "type_vocab_size" in this model was changed (from 1 to 2) and, therefore, it cannot be easily converted to Tensorflow.
- The models were trained only for 5 epochs.
- The English data was preprocessed to match the Portuguese data, so there are some differences in role attributions and some roles were removed from the data.
## Training procedure
The models were trained on the CoNLL-2012 dataset, preprocessed to match the Portuguese PropBank.Br data. They were tested on the PropBank.Br data set as well as on a smaller opinion dataset "Buscapé". For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Eval results
| Model Name | F<sub>1</sub> CV PropBank.Br (in domain) | F<sub>1</sub> Buscapé (out of domain) |
| --------------- | ------ | ----- |
| `srl-pt_bertimbau-base` | 76.30 | 73.33 |
| `srl-pt_bertimbau-large` | 77.42 | 74.85 |
| `srl-pt_xlmr-base` | 75.22 | 72.82 |
| `srl-pt_xlmr-large` | 77.59 | 73.84 |
| `srl-pt_mbert-base` | 72.76 | 66.89 |
| `srl-en_xlmr-base` | 66.59 | 65.24 |
| `srl-en_xlmr-large` | 67.60 | 64.94 |
| `srl-en_mbert-base` | 63.07 | 58.56 |
| `srl-enpt_xlmr-base` | 76.50 | 73.74 |
| `srl-enpt_xlmr-large` | **78.22** | 74.55 |
| `srl-enpt_mbert-base` | 74.88 | 69.19 |
| `ud_srl-pt_bertimbau-large` | 77.53 | 74.49 |
| `ud_srl-pt_xlmr-large` | 77.69 | 74.91 |
| `ud_srl-enpt_xlmr-large` | 77.97 | **75.05** |
### BibTeX entry and citation info
```bibtex
@misc{oliveira2021transformers,
title={Transformers and Transfer Learning for Improving Portuguese Semantic Role Labeling},
author={Sofia Oliveira and Daniel Loureiro and Alípio Jorge},
year={2021},
eprint={2101.01213},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
lesso14/5513e86f-454f-4f4a-a778-ad2f1f5e7330 | lesso14 | "2025-04-15T09:10:34Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-15T08:42:08Z" | |
mergekit-community/MN-Hekate-Nyktipolos-17B | mergekit-community | "2025-04-04T11:56:15Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407",
"base_model:merge:HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407",
"base_model:Nitral-AI/Captain-Eris_Violet-GRPO-v0.420",
"base_model:merge:Nitral-AI/Captain-Eris_Violet-GRPO-v0.420",
"base_model:mergekit-community/MN-Hekate-Daidalos-17B",
"base_model:merge:mergekit-community/MN-Hekate-Daidalos-17B",
"base_model:mergekit-community/MN-Sappho-g3-12B",
"base_model:merge:mergekit-community/MN-Sappho-g3-12B",
"base_model:mistralai/Mistral-Nemo-Base-2407",
"base_model:merge:mistralai/Mistral-Nemo-Base-2407",
"base_model:nbeerbower/Lyra-Gutenberg-mistral-nemo-12B",
"base_model:merge:nbeerbower/Lyra-Gutenberg-mistral-nemo-12B",
"base_model:nbeerbower/Mistral-Nemo-Gutenberg-Doppel-12B",
"base_model:merge:nbeerbower/Mistral-Nemo-Gutenberg-Doppel-12B",
"base_model:nbeerbower/mistral-nemo-bophades-12B",
"base_model:merge:nbeerbower/mistral-nemo-bophades-12B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-04T11:45:51Z" | ---
base_model:
- nbeerbower/Mistral-Nemo-Gutenberg-Doppel-12B
- mistralai/Mistral-Nemo-Base-2407
- nbeerbower/Lyra-Gutenberg-mistral-nemo-12B
- nbeerbower/mistral-nemo-bophades-12B
- HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407
- mergekit-community/MN-Sappho-g3-12B
- Nitral-AI/Captain-Eris_Violet-GRPO-v0.420
- mergekit-community/MN-Hekate-Daidalos-17B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [mergekit-community/MN-Hekate-Daidalos-17B](https://huggingface.co/mergekit-community/MN-Hekate-Daidalos-17B) as a base.
### Models Merged
The following models were included in the merge:
* [nbeerbower/Mistral-Nemo-Gutenberg-Doppel-12B](https://huggingface.co/nbeerbower/Mistral-Nemo-Gutenberg-Doppel-12B)
* [mistralai/Mistral-Nemo-Base-2407](https://huggingface.co/mistralai/Mistral-Nemo-Base-2407)
* [nbeerbower/Lyra-Gutenberg-mistral-nemo-12B](https://huggingface.co/nbeerbower/Lyra-Gutenberg-mistral-nemo-12B)
* [nbeerbower/mistral-nemo-bophades-12B](https://huggingface.co/nbeerbower/mistral-nemo-bophades-12B)
* [HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407](https://huggingface.co/HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407)
* [mergekit-community/MN-Sappho-g3-12B](https://huggingface.co/mergekit-community/MN-Sappho-g3-12B)
* [Nitral-AI/Captain-Eris_Violet-GRPO-v0.420](https://huggingface.co/Nitral-AI/Captain-Eris_Violet-GRPO-v0.420)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: float32
out_dtype: bfloat16
merge_method: model_stock
base_model: mergekit-community/MN-Hekate-Daidalos-17B
slices:
- sources:
- model: mergekit-community/MN-Hekate-Daidalos-17B
layer_range: [0, 20]
- sources:
- model: mergekit-community/MN-Hekate-Daidalos-17B
layer_range: [20, 36]
parameters:
weight: 3
- model: mistralai/Mistral-Nemo-Base-2407
layer_range: [16, 32]
parameters:
weight: 2
- model: Nitral-AI/Captain-Eris_Violet-GRPO-v0.420
layer_range: [16, 32]
parameters:
weight: [1.5, 1.49, 1.46, 1.4, 1.33, 1.25, 1.15, 1.05, 1]
- model: nbeerbower/mistral-nemo-bophades-12B
layer_range: [16, 32]
parameters:
weight: [1.5, 1.49, 1.46, 1.4, 1.33, 1.25, 1.15, 1.05, 1]
- model: mergekit-community/MN-Sappho-g3-12B
layer_range: [16, 32]
parameters:
weight: [1.0, 1.1, 1.2, 1.29, 1.37, 1.43, 1.48, 1.5, 1.5]
- model: nbeerbower/Lyra-Gutenberg-mistral-nemo-12B
layer_range: [16, 32]
parameters:
weight: [1.0, 1.1, 1.2, 1.29, 1.37, 1.43, 1.48, 1.5, 1.5]
- sources:
- model: mergekit-community/MN-Hekate-Daidalos-17B
layer_range: [36, 56]
parameters:
weight: 3
- model: nbeerbower/Mistral-Nemo-Gutenberg-Doppel-12B
layer_range: [20, 40]
parameters:
weight: [1, 0.75]
- model: Nitral-AI/Captain-Eris_Violet-GRPO-v0.420
layer_range: [20, 40]
parameters:
weight: [1, 0.5]
- model: HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407
layer_range: [20, 40]
parameters:
weight: [1, 0.75]
tokenizer:
source: union
tokens:
"[INST]":
source: mergekit-community/MN-Hekate-Daidalos-17B
force: true
"[/INST]":
source: mergekit-community/MN-Hekate-Daidalos-17B
force: true
"<|im_start|>":
source: mergekit-community/MN-Hekate-Daidalos-17B
force: true
"<|im_end|>":
source: mergekit-community/MN-Hekate-Daidalos-17B
force: true
```
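The merge can be reproduced with mergekit's command-line tool; a minimal sketch (assuming the YAML above is saved as `config.yaml`; the output directory name is arbitrary):
```shell
pip install mergekit
mergekit-yaml config.yaml ./MN-Hekate-Nyktipolos-17B --cuda
```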
|
joshualxndrs/Convertalk | joshualxndrs | "2025-03-19T03:23:21Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:deepseek-ai/deepseek-llm-7b-chat",
"base_model:adapter:deepseek-ai/deepseek-llm-7b-chat",
"region:us"
] | null | "2025-03-19T02:21:12Z" | ---
base_model: deepseek-ai/deepseek-llm-7b-chat
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
khilan-crest/twitter-roberta-base-sentiment-latest_28122024T091935 | khilan-crest | "2024-12-28T03:55:35Z" | 117 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:cardiffnlp/twitter-roberta-base-sentiment-latest",
"base_model:finetune:cardiffnlp/twitter-roberta-base-sentiment-latest",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-12-28T03:54:44Z" | ---
library_name: transformers
base_model: cardiffnlp/twitter-roberta-base-sentiment-latest
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: twitter-roberta-base-sentiment-latest_28122024T091935
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter-roberta-base-sentiment-latest_28122024T091935
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8677
- F1: 0.6678
- Learning Rate: 0.0
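A minimal inference sketch (label names depend on the fine-tuning configuration, which is not documented here):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="khilan-crest/twitter-roberta-base-sentiment-latest_28122024T091935",
)
print(classifier("I love this!"))
```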
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use adamw_hf with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 200
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch  | Step | Validation Loss | F1     | Learning Rate |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| No log | 0.9984 | 320 | 0.9190 | 0.6259 | 0.0000 |
| 1.1219 | 2.0 | 641 | 0.8677 | 0.6678 | 0.0000 |
| 1.1219 | 2.9984 | 961 | 0.8741 | 0.6850 | 0.0000 |
| 0.6722 | 4.0 | 1282 | 0.9274 | 0.6853 | 0.0000 |
| 0.4361 | 4.9922 | 1600 | 0.9376 | 0.6851 | 0.0 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.20.3
|
mradermacher/Barcenas-3b-GRPO-GGUF | mradermacher | "2025-02-10T05:12:42Z" | 264 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:openai/gsm8k",
"base_model:Danielbrdz/Barcenas-3b-GRPO",
"base_model:quantized:Danielbrdz/Barcenas-3b-GRPO",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-09T01:53:19Z" | ---
base_model: Danielbrdz/Barcenas-3b-GRPO
datasets:
- openai/gsm8k
language:
- en
library_name: transformers
license: llama3.2
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Danielbrdz/Barcenas-3b-GRPO
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Barcenas-3b-GRPO-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
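For example, a single quant can be fetched with the Hugging Face CLI (a sketch; substitute any filename from the table below):
```shell
pip install -U "huggingface_hub[cli]"
huggingface-cli download mradermacher/Barcenas-3b-GRPO-GGUF Barcenas-3b-GRPO.Q4_K_M.gguf --local-dir .
```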
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Barcenas-3b-GRPO-GGUF/resolve/main/Barcenas-3b-GRPO.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Barcenas-3b-GRPO-GGUF/resolve/main/Barcenas-3b-GRPO.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Barcenas-3b-GRPO-GGUF/resolve/main/Barcenas-3b-GRPO.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Barcenas-3b-GRPO-GGUF/resolve/main/Barcenas-3b-GRPO.Q3_K_L.gguf) | Q3_K_L | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Barcenas-3b-GRPO-GGUF/resolve/main/Barcenas-3b-GRPO.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Barcenas-3b-GRPO-GGUF/resolve/main/Barcenas-3b-GRPO.Q4_K_S.gguf) | Q4_K_S | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Barcenas-3b-GRPO-GGUF/resolve/main/Barcenas-3b-GRPO.Q4_K_M.gguf) | Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Barcenas-3b-GRPO-GGUF/resolve/main/Barcenas-3b-GRPO.Q5_K_S.gguf) | Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Barcenas-3b-GRPO-GGUF/resolve/main/Barcenas-3b-GRPO.Q5_K_M.gguf) | Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Barcenas-3b-GRPO-GGUF/resolve/main/Barcenas-3b-GRPO.Q6_K.gguf) | Q6_K | 2.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Barcenas-3b-GRPO-GGUF/resolve/main/Barcenas-3b-GRPO.Q8_0.gguf) | Q8_0 | 3.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Barcenas-3b-GRPO-GGUF/resolve/main/Barcenas-3b-GRPO.f16.gguf) | f16 | 6.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
maywell/PiVoT-10.7B-Mistral-v0.2-RP | maywell | "2023-12-16T12:09:19Z" | 1,418 | 7 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-12-16T11:21:46Z" | ---
license: cc-by-sa-4.0
language:
- en
pipeline_tag: text-generation
---
# PiVoT-10.7B-Mistral-v0.2-RP

# **Model Details**
### Description
PiVoT is an RP-finetuned model based on PiVoT-10.7B-Mistral-v0.2, trained on a private RP dataset collected from ArcaLive Ai Chat Chan.
This model has a sequence length (sliding window) of **8192**.
Follow me on Twitter: https://twitter.com/stablefluffy
Consider supporting my solo work on these models: https://www.buymeacoffee.com/mwell or with a Runpod credit gift 💕
Contact me on Telegram: https://t.me/AlzarTakkarsen |
dcferreira/detoxify-optimized | dcferreira | "2023-05-19T17:35:13Z" | 35 | 2 | transformers | [
"transformers",
"onnx",
"xlm-roberta",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-05-19T11:27:06Z" | ---
license: apache-2.0
---
This repo has an optimized version of [Detoxify](https://github.com/unitaryai/detoxify/), which needs less disk space and less memory, at the cost of a small drop in accuracy.
This is an experiment for me to learn how to use [🤗 Optimum](https://huggingface.co/docs/optimum/index).
# Usage
Loading the model requires the [🤗 Optimum](https://huggingface.co/docs/optimum/index) library installed.
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from optimum.pipelines import pipeline as opt_pipeline
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("dcferreira/detoxify-optimized")
model = ORTModelForSequenceClassification.from_pretrained("dcferreira/detoxify-optimized")
pipe = opt_pipeline(
model=model,
task="text-classification",
function_to_apply="sigmoid",
accelerator="ort",
tokenizer=tokenizer,
top_k=None, # return scores for all the labels, model was trained as multilabel
)
print(pipe(['example text','exemple de texte','texto de ejemplo','testo di esempio','texto de exemplo','örnek metin','пример текста']))
```
# Performance
The table below compares the original model, the original model running on [onnxruntime](https://onnxruntime.ai/), and the model optimized with onnxruntime.
| model | Accuracy (%) | Samples p/ second (CPU) | Samples p/ second (GPU) | GPU VRAM | Disk Space |
|----------------|----------|-------------------------|-------------------------|----------|------------|
| original | 92.1083 | 16 | 250 | 3GB | 1.1GB |
| ort | 92.1067 | 19 | 340 | 4GB | 1.1GB |
| optimized (O4) | 92.1031 | 14 | 650 | 2GB | 540MB |
For details on how these numbers were reached, check out `evaluate.py` in this repo.
|
legraphista/Phi-3-mini-4k-instruct-update2024_07_03-IMat-GGUF | legraphista | "2024-07-03T09:28:13Z" | 277 | 0 | gguf | [
"gguf",
"nlp",
"code",
"quantized",
"GGUF",
"quantization",
"imat",
"imatrix",
"static",
"16bit",
"8bit",
"6bit",
"5bit",
"4bit",
"3bit",
"2bit",
"1bit",
"text-generation",
"en",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:quantized:microsoft/Phi-3-mini-4k-instruct",
"license:mit",
"region:us",
"conversational"
] | text-generation | "2024-07-03T09:12:41Z" | ---
base_model: microsoft/Phi-3-mini-4k-instruct
inference: false
language:
- en
library_name: gguf
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- nlp
- code
- quantized
- GGUF
- quantization
- imat
- imatrix
- static
- 16bit
- 8bit
- 6bit
- 5bit
- 4bit
- 3bit
- 2bit
- 1bit
---
# Phi-3-mini-4k-instruct-IMat-GGUF
_Llama.cpp imatrix quantization of microsoft/Phi-3-mini-4k-instruct_
Original Model: [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)
Original dtype: `BF16` (`bfloat16`)
Quantized by: llama.cpp [b3286](https://github.com/ggerganov/llama.cpp/releases/tag/b3286)
IMatrix dataset: [here](https://gist.githubusercontent.com/bartowski1182/eb213dccb3571f863da82e99418f81e8/raw/b2869d80f5c16fd7082594248e80144677736635/calibration_datav3.txt)
- [Files](#files)
- [IMatrix](#imatrix)
- [Common Quants](#common-quants)
- [All Quants](#all-quants)
- [Downloading using huggingface-cli](#downloading-using-huggingface-cli)
- [Inference](#inference)
- [Simple chat template](#simple-chat-template)
- [Chat template with system prompt](#chat-template-with-system-prompt)
- [Llama.cpp](#llama-cpp)
- [FAQ](#faq)
- [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere)
- [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf)
---
## Files
### IMatrix
Status: ✅ Available
Link: [here](https://huggingface.co/legraphista/Phi-3.1-mini-4k-instruct/blob/main/imatrix.dat)
### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [Phi-3-mini-4k-instruct.Q8_0.gguf](https://huggingface.co/legraphista/Phi-3.1-mini-4k-instruct/blob/main/Phi-3-mini-4k-instruct.Q8_0.gguf) | Q8_0 | 4.06GB | ✅ Available | ⚪ Static | 📦 No
| [Phi-3-mini-4k-instruct.Q6_K.gguf](https://huggingface.co/legraphista/Phi-3.1-mini-4k-instruct/blob/main/Phi-3-mini-4k-instruct.Q6_K.gguf) | Q6_K | 3.14GB | ✅ Available | ⚪ Static | 📦 No
| [Phi-3-mini-4k-instruct.Q4_K.gguf](https://huggingface.co/legraphista/Phi-3.1-mini-4k-instruct/blob/main/Phi-3-mini-4k-instruct.Q4_K.gguf) | Q4_K | 2.39GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Phi-3-mini-4k-instruct.Q3_K.gguf](https://huggingface.co/legraphista/Phi-3.1-mini-4k-instruct/blob/main/Phi-3-mini-4k-instruct.Q3_K.gguf) | Q3_K | 1.96GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Phi-3-mini-4k-instruct.Q2_K.gguf](https://huggingface.co/legraphista/Phi-3.1-mini-4k-instruct/blob/main/Phi-3-mini-4k-instruct.Q2_K.gguf) | Q2_K | 1.42GB | ✅ Available | 🟢 IMatrix | 📦 No
### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [Phi-3-mini-4k-instruct.BF16.gguf](https://huggingface.co/legraphista/Phi-3.1-mini-4k-instruct/blob/main/Phi-3-mini-4k-instruct.BF16.gguf) | BF16 | 7.64GB | ✅ Available | ⚪ Static | 📦 No
| [Phi-3-mini-4k-instruct.FP16.gguf](https://huggingface.co/legraphista/Phi-3.1-mini-4k-instruct/blob/main/Phi-3-mini-4k-instruct.FP16.gguf) | F16 | 7.64GB | ✅ Available | ⚪ Static | 📦 No
| [Phi-3-mini-4k-instruct.Q8_0.gguf](https://huggingface.co/legraphista/Phi-3.1-mini-4k-instruct/blob/main/Phi-3-mini-4k-instruct.Q8_0.gguf) | Q8_0 | 4.06GB | ✅ Available | ⚪ Static | 📦 No
| [Phi-3-mini-4k-instruct.Q6_K.gguf](https://huggingface.co/legraphista/Phi-3.1-mini-4k-instruct/blob/main/Phi-3-mini-4k-instruct.Q6_K.gguf) | Q6_K | 3.14GB | ✅ Available | ⚪ Static | 📦 No
| [Phi-3-mini-4k-instruct.Q5_K.gguf](https://huggingface.co/legraphista/Phi-3.1-mini-4k-instruct/blob/main/Phi-3-mini-4k-instruct.Q5_K.gguf) | Q5_K | 2.82GB | ✅ Available | ⚪ Static | 📦 No
| [Phi-3-mini-4k-instruct.Q5_K_S.gguf](https://huggingface.co/legraphista/Phi-3.1-mini-4k-instruct/blob/main/Phi-3-mini-4k-instruct.Q5_K_S.gguf) | Q5_K_S | 2.64GB | ✅ Available | ⚪ Static | 📦 No
| [Phi-3-mini-4k-instruct.Q4_K.gguf](https://huggingface.co/legraphista/Phi-3.1-mini-4k-instruct/blob/main/Phi-3-mini-4k-instruct.Q4_K.gguf) | Q4_K | 2.39GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Phi-3-mini-4k-instruct.Q4_K_S.gguf](https://huggingface.co/legraphista/Phi-3.1-mini-4k-instruct/blob/main/Phi-3-mini-4k-instruct.Q4_K_S.gguf) | Q4_K_S | 2.19GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Phi-3-mini-4k-instruct.IQ4_NL.gguf](https://huggingface.co/legraphista/Phi-3.1-mini-4k-instruct/blob/main/Phi-3-mini-4k-instruct.IQ4_NL.gguf) | IQ4_NL | 2.18GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Phi-3-mini-4k-instruct.IQ4_XS.gguf](https://huggingface.co/legraphista/Phi-3.1-mini-4k-instruct/blob/main/Phi-3-mini-4k-instruct.IQ4_XS.gguf) | IQ4_XS | 2.06GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Phi-3-mini-4k-instruct.Q3_K.gguf](https://huggingface.co/legraphista/Phi-3.1-mini-4k-instruct/blob/main/Phi-3-mini-4k-instruct.Q3_K.gguf) | Q3_K | 1.96GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Phi-3-mini-4k-instruct.Q3_K_L.gguf](https://huggingface.co/legraphista/Phi-3.1-mini-4k-instruct/blob/main/Phi-3-mini-4k-instruct.Q3_K_L.gguf) | Q3_K_L | 2.09GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Phi-3-mini-4k-instruct.Q3_K_S.gguf](https://huggingface.co/legraphista/Phi-3.1-mini-4k-instruct/blob/main/Phi-3-mini-4k-instruct.Q3_K_S.gguf) | Q3_K_S | 1.68GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Phi-3-mini-4k-instruct.IQ3_M.gguf](https://huggingface.co/legraphista/Phi-3.1-mini-4k-instruct/blob/main/Phi-3-mini-4k-instruct.IQ3_M.gguf) | IQ3_M | 1.86GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Phi-3-mini-4k-instruct.IQ3_S.gguf](https://huggingface.co/legraphista/Phi-3.1-mini-4k-instruct/blob/main/Phi-3-mini-4k-instruct.IQ3_S.gguf) | IQ3_S | 1.68GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Phi-3-mini-4k-instruct.IQ3_XS.gguf](https://huggingface.co/legraphista/Phi-3.1-mini-4k-instruct/blob/main/Phi-3-mini-4k-instruct.IQ3_XS.gguf) | IQ3_XS | 1.63GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Phi-3-mini-4k-instruct.IQ3_XXS.gguf](https://huggingface.co/legraphista/Phi-3.1-mini-4k-instruct/blob/main/Phi-3-mini-4k-instruct.IQ3_XXS.gguf) | IQ3_XXS | 1.51GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Phi-3-mini-4k-instruct.Q2_K.gguf](https://huggingface.co/legraphista/Phi-3.1-mini-4k-instruct/blob/main/Phi-3-mini-4k-instruct.Q2_K.gguf) | Q2_K | 1.42GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Phi-3-mini-4k-instruct.Q2_K_S.gguf](https://huggingface.co/legraphista/Phi-3.1-mini-4k-instruct/blob/main/Phi-3-mini-4k-instruct.Q2_K_S.gguf) | Q2_K_S | 1.33GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Phi-3-mini-4k-instruct.IQ2_M.gguf](https://huggingface.co/legraphista/Phi-3.1-mini-4k-instruct/blob/main/Phi-3-mini-4k-instruct.IQ2_M.gguf) | IQ2_M | 1.32GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Phi-3-mini-4k-instruct.IQ2_S.gguf](https://huggingface.co/legraphista/Phi-3.1-mini-4k-instruct/blob/main/Phi-3-mini-4k-instruct.IQ2_S.gguf) | IQ2_S | 1.22GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Phi-3-mini-4k-instruct.IQ2_XS.gguf](https://huggingface.co/legraphista/Phi-3.1-mini-4k-instruct/blob/main/Phi-3-mini-4k-instruct.IQ2_XS.gguf) | IQ2_XS | 1.15GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Phi-3-mini-4k-instruct.IQ2_XXS.gguf](https://huggingface.co/legraphista/Phi-3.1-mini-4k-instruct/blob/main/Phi-3-mini-4k-instruct.IQ2_XXS.gguf) | IQ2_XXS | 1.04GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Phi-3-mini-4k-instruct.IQ1_M.gguf](https://huggingface.co/legraphista/Phi-3.1-mini-4k-instruct/blob/main/Phi-3-mini-4k-instruct.IQ1_M.gguf) | IQ1_M | 917.11MB | ✅ Available | 🟢 IMatrix | 📦 No
| [Phi-3-mini-4k-instruct.IQ1_S.gguf](https://huggingface.co/legraphista/Phi-3.1-mini-4k-instruct/blob/main/Phi-3-mini-4k-instruct.IQ1_S.gguf) | IQ1_S | 841.61MB | ✅ Available | 🟢 IMatrix | 📦 No
## Downloading using huggingface-cli
If you do not have `huggingface-cli` installed:
```
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want:
```
huggingface-cli download legraphista/Phi-3.1-mini-4k-instruct --include "Phi-3-mini-4k-instruct.Q8_0.gguf" --local-dir ./
```
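If you prefer the Python API, the same file can be fetched with `huggingface_hub` (a sketch equivalent to the CLI command above):
```python
from huggingface_hub import hf_hub_download

# Downloads the Q8_0 quant into the current directory
hf_hub_download(
    repo_id="legraphista/Phi-3.1-mini-4k-instruct",
    filename="Phi-3-mini-4k-instruct.Q8_0.gguf",
    local_dir="./",
)
```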
If the model file is large, it has been split into multiple chunks. To download them all to a local folder, run:
```
huggingface-cli download legraphista/Phi-3.1-mini-4k-instruct --include "Phi-3-mini-4k-instruct.Q8_0/*" --local-dir ./
# see FAQ for merging GGUF's
```
---
## Inference
### Simple chat template
```
<|user|>
{user_prompt}<|end|>
<|assistant|>
{assistant_response}<|end|>
<|user|>
{next_user_prompt}<|end|>
<|endoftext|>
```
### Chat template with system prompt
```
<|system|>
{system_prompt}<|end|>
<|user|>
{user_prompt}<|end|>
<|assistant|>
{assistant_response}<|end|>
<|user|>
{next_user_prompt}<|end|>
<|endoftext|>
```
### Llama.cpp
```
llama.cpp/main -m Phi-3-mini-4k-instruct.Q8_0.gguf --color -i -p "prompt here (according to the chat template)"
```
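If you would rather stay in Python, here is a minimal sketch using the `llama-cpp-python` bindings (an assumption on our side — the bindings are a separate install and not part of this repo):
```python
from llama_cpp import Llama

# Load the local GGUF file; n_ctx matches the model's 4k context window
llm = Llama(model_path="Phi-3-mini-4k-instruct.Q8_0.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize GGUF quantization in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```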
---
## FAQ
### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that lower quantizations are the only ones that benefit from the imatrix input (as per hellaswag results).
### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
- To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
- Download the appropriate zip for your system from the latest release
- Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `Phi-3-mini-4k-instruct.Q8_0`)
3. Run `gguf-split --merge Phi-3-mini-4k-instruct.Q8_0/Phi-3-mini-4k-instruct.Q8_0-00001-of-XXXXX.gguf Phi-3-mini-4k-instruct.Q8_0.gguf`
- Make sure to point `gguf-split` to the first chunk of the split.
---
Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)! |
gsarti/it5-efficient-small-el32-wiki-summarization | gsarti | "2022-10-12T13:03:53Z" | 118 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"italian",
"sequence-to-sequence",
"wikipedia",
"summarization",
"efficient",
"wits",
"it",
"dataset:wits",
"arxiv:2203.03759",
"arxiv:2109.10686",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | summarization | "2022-04-28T14:13:19Z" | ---
language:
- it
license: apache-2.0
datasets:
- wits
tags:
- italian
- sequence-to-sequence
- wikipedia
- summarization
- efficient
- wits
widget:
- text: "La 5ª Commissione ha competenza per i disegni di legge riguardanti le specifiche materie del bilancio, del personale e dei servizi del Ministero dell'economia, nonché per i disegni di legge riguardanti la materia finanziaria. La Commissione è composta da 26 senatori (di cui 2 segretari, 2 vicepresidenti di cui 1 componente esterno, e un presidente) scelti in modo omogeneo tra i componenti di quel ramo del Parlamento, in modo da rispecchiarne le forze politiche presenti. Essi sono scelti dai gruppi parlamentari (e non dal Presidente, come invece accade per l'organismo della Giunta parlamentare): per la nomina dei membri ciascun Gruppo, entro cinque giorni dalla propria costituzione, procede, dandone comunicazione alla Presidenza del Senato, alla designazione dei propri rappresentanti nelle singole Commissioni permanenti. Ogni senatore chiamato a far parte del governo o eletto presidente della Commissione è, per la durata della carica, sostituito dal suo gruppo nella Commissione con un altro senatore, che continuerà ad appartenere anche alla Commissione di provenienza. Tranne in rari casi nessun Senatore può essere assegnato a più di una Commissione permanente. Le Commissioni permanenti sono rinnovate dopo il primo biennio della legislatura ed i loro componenti possono essere confermati."
- text: "Interni della chiesa Si pensa che già ai tempi di Gediminas vi fosse una piccola chiesa, probabilmente in legno. Nel 1408 circa Vitoldo costruì la chiesa dello Spirito Santo che andò in seguito ampliata. Nel 1501 Alessandro Jagellone lo donò al monastero domenicano, il più antico della Lituania, che nel 1679-88 fu ampliato e ricostruito. Di quel periodo sopravvivono le mura della chiesa, mentre l'arredamento interno fu realizzato nel 1749-1770 e la cupola affrontò dei lavori di restauro nel 1752-1760. Nel 1844 le autorità zariste chiusero il monastero e la chiesa divenne parrocchiale. Oggi serve la comunità polacca di Vilnius. Su via Šv. Ignoto fu fondato un monastero domenicano nel 1501. Come molti altri edifici, questo monastero fu convertito in una prigione dalle autorità zariste nel 1807. Costituì un luogo di prigionia per molti patrioti lituani, nello specifico i Filareti, i quali parteciparono alle rivolte del 1831 e del 1863. Organo La chiesa si trova lateralmente rispetto alla strada e non ha una facciata principale ben disegnata. L'altezza, inclusa la cupola, è di 51 m. La parte inferiore della facciata (con piccole torri gemelle) è ricoperta da edifici conventuali e l'esterno presenta caratteristiche architettoniche tipiche del tardo barocco. Celebre per i fantasiosi ornamenti rococò, l'interno della chiesa è tra i più celebri della Lituania per via dei cartigli con vari stemmi e affreschi lungo la navata: vi sono 16 altari nella chiesa. Gli altari e il pulpito sono assai decorati con sculture e ornamenti rotondi e in rilievo. Tra gli affreschi barocchi, si pensi alla composizione multi-figurale intitolata ''Apoteosi dello Spirito Santo'' (neobarocco, XIX secolo) nella cupola, 45 dipinti nella chiesa (tra cui un'immagine di Santa Barbara con un'ambientazione del XVII o XVIII secolo, una di Santa Caterina da Siena in stile rococò di Szymon Czechowicz, un ritratto di Alessandro Jagellone di un artista sconosciuto della seconda metà del XVIII secolo). Un ingresso sotto l'altare conduce alle grandi volte, labirintiche, con molte stanze e cripte: i sotterranei ospitano i resti di centinaia di residenti di Vilnius, alcuni dei quali mummificatisi naturalmente, e sono circondati da leggende metropolitane. Sebbene l'esistenza dei sotterranei fosse nota, i primi sforzi per esplorare e mappare le cripte furono abbandonate nonostante lo sforzo degli studenti dell'Università di Vilnius negli anni '30. Tuttavia, questi ultimi non avevano osservato le corrette procedure archeologiche e causarono infatti molti danni: il modus operandi prevedeva lo smistamento delle ossa ponendo tutti i teschi sugli scaffali e rimuovendoli le tombe. Da allora, i resti sono stati spostati molte volte lasciandoli in uno stato casuale e disorganizzato. Stando alle leggende che aleggiano sul luogo, i resti sarebbero di soldati francesi recatisi in città nel corso della campagna di Russia del 1812 avviata da Napoleone Bonaparte, di vittime dell'Inquisizione o della peste nera. Più romantiche risultano le affermazioni di chi sostiene che i corridoi sotterranei facevano parte di una rete di passaggi più ampia che consentiva agli amanti leggendari Barbara Radziwiłł e Sigismondo II Augusto di incontrarsi in segreto. Nel 2011, gli antropologi dell'Università di Vilnius, guidati da Rimantas Jankauskas, avviarono uno studio sui corpi mummificati, stimando settimane dopo che le volte conservassero i resti di circa 600 persone, tra cui molte donne e bambini dalla metà del XVIII secolo all'inizio del XIX secolo. 
Il team ha selezionato i cadaveri meglio conservati e ha eseguito la loro tomografia. I risultati mostrano che molte persone erano in sovrappeso e avevano l'alluce valgo, il che ha portato alla conclusione che si trattava di alti borghesi o comunque di cittadini abbienti. "
- text: "Le dimensioni dell'isola sono di 8 km di lunghezza e di 3,2 km di larghezza. Si trova a 1,6 km a sud-est dell'isola di Renaud, dalla quale è separata dal passaggio Rodman. La sua altezza è di 100 m. Fu scoperta dall'esploratore e baleniere britannico John Biscoe nel 1832 e venne mappata durante una spedizione antartica francese realizzata nel primo decennio del XX secolo. Al comando della spedizione era Jean-Baptiste Charcot e il nome fu scelto per onorare l'esploratore e geografo francese Charles Rabot. === Rivendicazioni territoriali === * Secondo l'Argentina appartiene al dipartimento dell'Antartide Argentina nella provincia della Terra del Fuoco. * Secondo il Cile appartiene al comune antartico della provincia cilena antartica nella regione di Magallanes e dell'Antartico cileno. * Secondo il Regno Unito fa parte del territorio antartico britannico. Per il Trattato Antartico tali rivendicazioni sono sospese. Sull'isola è presente il rifugio Guillochon, sito storico antartico. "
- text: "Vanni ha la sua prima mostra personale nel 1948, alla Galleria Margherita di Roma. Nel 1949 vince una borsa di studio che lo porterà a studiare ad Amsterdam sotto la guida del pittore neoplastico Friedrich Vordemberge-Gildewart. Nel 1952 vince una Fulbright Scholarship che lo porterà a studiare in America, alla Yale University, sotto la guida di Josef Albers. Dal 1953 al 1960 si stabilisce a Parigi, dove illustra alcuni libri per bambini che in seguito vinceranno il premio del Club des Editeurs. Nel 1954 lavora come consulente del colore per il documentario su Picasso di Luciano Emmer, e nel 1955 comincia la sua lunga collaborazione con la Galleria Schneider, affiancando artisti come Corrado Cagli. Dal 1969 al 1974 lavora su dei bassorilievi in vetro resina sui quali vengono proiettati dei film astratti da lui creati, per creare dei quadri che si trasformino continuamente nel tempo. Nel 1979 lascia Roma per stabilirsi a New York, dove alla carriera di pittore affiancherà quella di professore per la prestigiosa Cooper Union School of Art, dove insegnerà ininterrottamente dal 1984 al 2014. L'opera pittorica di Vanni è segnata da una visione estremamente personale, lontana dalle correnti e dai movimenti che hanno caratterizzato la seconda metà del XX secolo. Memore delle lunghe conversazioni avute da Vanni nella sua primissima gioventù, con il filosofo e pittore futurista Alberto Bragaglia, le sue opere sono contrassegnate da un “eclettismo” formale programmatico, alla base del quale resta costante una conoscenza profonda delle molteplici tecniche artistiche utilizzate (tra cui il mosaico, l’affresco e la tempera ad uovo). Pur esprimendosi per lo più in cicli di opere dove l’astrazione formale è la principale componente figurativa, sono da sottolineare alcune opere dove Vanni ha dato prova di una importante padronanza dell’arte figurativa. Importanti e numerose sono le sue realizzazioni anche nel campo dell’illustrazione. Sue sono le illustrazioni per la novella ''Agostino'' di Alberto Moravia, per il libro ''Love'' di Lowell A. Siff e delle ''Contes de Cristal'' di Alice Coléno. Ha tenuto mostre personali in Italia e all’estero ed esposto in mostre collettive di rappresentanza italiana nei musei e nelle gallerie di ogni parte del mondo. "
metrics:
- rouge
- bertscore
model-index:
- name: it5-efficient-small-el32-wiki-summarization
results:
- task:
type: wiki-summarization
name: "Wikipedia Summarization"
dataset:
type: wits
name: "WITS"
metrics:
- type: rouge1
value: 0.346
name: "Test Rouge1"
- type: rouge2
value: 0.196
name: "Test Rouge2"
- type: rougeL
value: 0.314
name: "Test RougeL"
- type: bertscore
value: 0.513
name: "Test BERTScore"
---
# IT5 Cased Small Efficient EL32 for Wikipedia Summarization 📑 🇮🇹
*Shout-out to [Stefan Schweter](https://github.com/stefan-it) for contributing the pre-trained efficient model!*
This repository contains the checkpoint for the [IT5 Cased Small Efficient EL32](https://huggingface.co/it5/it5-efficient-small-el32) model fine-tuned on Wikipedia summarization on the [WITS](https://www.semanticscholar.org/paper/WITS%3A-Wikipedia-for-Italian-Text-Summarization-Casola-Lavelli/ad6c83122e721c7c0db4a40727dac3b4762cd2b1) dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
Efficient IT5 models differ from the standard ones by adopting a different vocabulary that enables cased text generation and an [optimized model architecture](https://arxiv.org/abs/2109.10686) to improve performance while reducing parameter count. The Small-EL32 replaces the original encoder from the T5 Small architecture with a 32-layer deep encoder, showing improved performance over the base model.
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for use in TensorFlow, PyTorch, and JAX. They can be used directly with the `pipeline` API as:
```python
from transformers import pipeline
hg = pipeline("text2text-generation", model='it5/it5-efficient-small-el32-wiki-summarization')
hg("Le dimensioni dell'isola sono di 8 km di lunghezza e di 3,2 km di larghezza. Si trova a 1,6 km a sud-est dell'isola di Renaud, dalla quale è separata dal passaggio Rodman. La sua altezza è di 100 m. Fu scoperta dall'esploratore e baleniere britannico John Biscoe nel 1832 e venne mappata durante una spedizione antartica francese realizzata nel primo decennio del XX secolo. Al comando della spedizione era Jean-Baptiste Charcot e il nome fu scelto per onorare l'esploratore e geografo francese Charles Rabot. === Rivendicazioni territoriali === * Secondo l'Argentina appartiene al dipartimento dell'Antartide Argentina nella provincia della Terra del Fuoco. * Secondo il Cile appartiene al comune antartico della provincia cilena antartica nella regione di Magallanes e dell'Antartico cileno. * Secondo il Regno Unito fa parte del territorio antartico britannico. Per il Trattato Antartico tali rivendicazioni sono sospese. Sull'isola è presente il rifugio Guillochon, sito storico antartico.")
>>> [{"generated_text": "L' '''isola di Rabot''' si trova in prossimità dell'isola di Renaud, a sud dell'Argentina."}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/it5-efficient-small-el32-wiki-summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-efficient-small-el32-wiki-summarization")
```
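For finer control over decoding, a short generation sketch with the autoclasses loaded above (the beam-search settings are illustrative, not the ones used in the paper):
```python
input_text = "Le dimensioni dell'isola sono di 8 km di lunghezza e di 3,2 km di larghezza."
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```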
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
``` |
tuphamdf/skincare-detection | tuphamdf | "2024-03-06T13:29:06Z" | 10 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"vision",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-03-01T10:24:38Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
- image-classification
- vision
base_model: google/vit-base-patch16-224-in21k
metrics:
- accuracy
model-index:
- name: skincare-detection
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# skincare-detection
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4840
- Accuracy: 0.8648
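A minimal inference sketch with the 🤗 `pipeline` API (the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="tuphamdf/skincare-detection")
print(classifier("skin_sample.jpg"))  # placeholder path to a local image
```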
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3961 | 0.99 | 61 | 0.5629 | 0.7725 |
| 0.4982 | 2.0 | 123 | 0.3991 | 0.8435 |
| 0.3563 | 2.99 | 184 | 0.4330 | 0.8272 |
| 0.2314 | 4.0 | 246 | 0.3969 | 0.8554 |
| 0.1815 | 4.99 | 307 | 0.4492 | 0.8435 |
| 0.1332 | 6.0 | 369 | 0.4474 | 0.8580 |
| 0.0869 | 6.99 | 430 | 0.4520 | 0.8631 |
| 0.0844 | 8.0 | 492 | 0.4469 | 0.8640 |
| 0.0681 | 8.99 | 553 | 0.4533 | 0.8717 |
| 0.0574 | 10.0 | 615 | 0.4952 | 0.8597 |
| 0.0477 | 10.99 | 676 | 0.4772 | 0.8674 |
| 0.0454 | 11.9 | 732 | 0.4840 | 0.8648 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.17.1
- Tokenizers 0.15.2
|
Alireza1044/mobilebert_sst2 | Alireza1044 | "2022-06-15T11:12:07Z" | 12 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-06-15T05:16:58Z" | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9036697247706422
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sst2
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1730
- Accuracy: 0.9037
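A minimal inference sketch with the 🤗 `pipeline` API (the example sentence is illustrative):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Alireza1044/mobilebert_sst2")
print(classifier("A wonderfully moving and smart film."))
```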
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Faitlesses/11111 | Faitlesses | "2025-03-21T14:18:36Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-03-21T14:18:33Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: DIKI
---
# 11111
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `DIKI` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Faitlesses/11111', weight_name='lora.safetensors')
image = pipeline('DIKI, your prompt').images[0]  # include the trigger word (see below)
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
speechbrain/sepformer-wham-enhancement | speechbrain | "2024-02-25T16:59:46Z" | 729 | 21 | speechbrain | [
"speechbrain",
"audio-to-audio",
"Speech Enhancement",
"WHAM!",
"SepFormer",
"Transformer",
"pytorch",
"en",
"arxiv:2010.13154",
"arxiv:2106.04624",
"license:apache-2.0",
"region:us"
] | audio-to-audio | "2022-03-02T23:29:05Z" | ---
language: "en"
thumbnail:
tags:
- audio-to-audio
- Speech Enhancement
- WHAM!
- SepFormer
- Transformer
- pytorch
- speechbrain
license: "apache-2.0"
metrics:
- SI-SNR
- PESQ
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# SepFormer trained on WHAM! for speech enhancement (8k sampling frequency)
This repository provides all the necessary tools to perform speech enhancement (denoising) with a [SepFormer](https://arxiv.org/abs/2010.13154v2) model, implemented with SpeechBrain and pretrained on the [WHAM!](http://wham.whisper.ai/) dataset at an 8 kHz sampling frequency. WHAM! is essentially a version of the WSJ0-Mix dataset with added environmental noise and reverberation, sampled at 8 kHz. For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The given model achieves 14.35 dB SI-SNR on the test set of the WHAM! dataset.
| Release | Test-Set SI-SNR | Test-Set PESQ |
|:-------------:|:--------------:|:--------------:|
| 01-12-21 | 14.35 | 3.07 |
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```
pip install speechbrain
```
Please note that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io).
### Perform speech enhancement on your own audio file
```python
from speechbrain.inference.separation import SepformerSeparation as separator
import torchaudio
model = separator.from_hparams(source="speechbrain/sepformer-wham-enhancement", savedir='pretrained_models/sepformer-wham-enhancement')
# for custom file, change path
est_sources = model.separate_file(path='speechbrain/sepformer-wham-enhancement/example_wham.wav')
torchaudio.save("enhanced_wham.wav", est_sources[:, :, 0].detach().cpu(), 8000)
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
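For example (a sketch of the call above with the GPU option added):
```python
model = separator.from_hparams(
    source="speechbrain/sepformer-wham-enhancement",
    savedir="pretrained_models/sepformer-wham-enhancement",
    run_opts={"device": "cuda"},  # run inference on the GPU
)
```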
### Training
The training script is currently being developed in an ongoing pull request.
We will update the model card as soon as the PR is merged.
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1bbQvaiN-R79M697NnekA7Rr0jIYtO6e3).
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
#### Referencing SpeechBrain
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
#### Referencing SepFormer
```bibtex
@inproceedings{subakan2021attention,
title={Attention is All You Need in Speech Separation},
author={Cem Subakan and Mirco Ravanelli and Samuele Cornell and Mirko Bronzi and Jianyuan Zhong},
year={2021},
booktitle={ICASSP 2021}
}
@article{subakan2023exploring,
author={Subakan, Cem and Ravanelli, Mirco and Cornell, Samuele and Grondin, François and Bronzi, Mirko},
journal={IEEE/ACM Transactions on Audio, Speech, and Language Processing},
title={Exploring Self-Attention Mechanisms for Speech Separation},
year={2023},
volume={31},
pages={2169-2180},
}
```
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/ |
selvino/Taxi-v3 | selvino | "2023-01-15T20:00:19Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-01-15T19:56:35Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

model = load_from_hub(repo_id="selvino/Taxi-v3", filename="q-learning.pkl")  # see helper sketch below
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
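`load_from_hub` is defined in the Deep RL course notebooks rather than in a library; a minimal sketch of an equivalent helper (the implementation below is our assumption):
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle a saved Q-table dictionary from the Hub."""
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)
```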
|
pashash0k/reward_model | pashash0k | "2025-03-28T21:44:19Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-classification",
"generated_from_trainer",
"trl",
"reward-trainer",
"dataset:HumanLLMs/Human-Like-DPO-Dataset",
"base_model:HuggingFaceTB/SmolLM-135M-Instruct",
"base_model:finetune:HuggingFaceTB/SmolLM-135M-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-03-28T21:39:34Z" | ---
base_model: HuggingFaceTB/SmolLM-135M-Instruct
datasets: HumanLLMs/Human-Like-DPO-Dataset
library_name: transformers
model_name: reward_model
tags:
- generated_from_trainer
- trl
- reward-trainer
licence: license
---
# Model Card for reward_model
This model is a fine-tuned version of [HuggingFaceTB/SmolLM-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM-135M-Instruct) on the [HumanLLMs/Human-Like-DPO-Dataset](https://huggingface.co/datasets/HumanLLMs/Human-Like-DPO-Dataset) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline

# A reward model is a sequence classifier: it scores a candidate response
# rather than generating text (the sample response below is illustrative).
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
response = "I would choose the future, to see how the challenges we face today were eventually solved."
reward_pipe = pipeline("text-classification", model="pashash0k/reward_model", device="cuda")
print(reward_pipe(question + "\n" + response))
```
## Training procedure
This model was trained with the TRL Reward Trainer.
### Framework versions
- TRL: 0.16.0
- Transformers: 4.50.3
- Pytorch: 2.3.1+cu121
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
gerbejon/digilog-eform-classifier-url-and-text | gerbejon | "2025-04-04T07:34:48Z" | 0 | 0 | null | [
"safetensors",
"distilbert",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-multilingual-cased",
"base_model:finetune:distilbert/distilbert-base-multilingual-cased",
"license:apache-2.0",
"region:us"
] | null | "2025-04-04T06:50:17Z" | |
annisamukhri/indosbert-climate-faq | annisamukhri | "2025-03-23T15:05:04Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:6461",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:denaya/indoSBERT-large",
"base_model:finetune:denaya/indoSBERT-large",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2025-03-23T14:58:53Z" | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:6461
- loss:MultipleNegativesRankingLoss
base_model: denaya/indoSBERT-large
widget:
- source_sentence: pembangkitan listrik global berasal sumber rendah karbon 2020
sentences:
- layar super retina xdr oled iphone 12 pro memiliki resolusi 2532 1170 piksel bezel
tipis dibanding iphone generasi lapisan kacakeramik bernama ceramic shield dikembangkan
corning inc apple mengklaim ceramic shield memiliki kinerja jatuh 4 kali kuat
dibanding kaca ponsel pintar
- ruu perubahan iklim 2008 dikenal ruu perubahan iklim
- 40 pembangkitan listrik berasal sumber rendah karbon 2020 10 tenaga nuklir 10
tenaga angin matahari 20 tenaga air energi terbarukan
- source_sentence: ringkasan pembuat kebijakan spm srccl
sentences:
- spm srccl masyarakat lahan iklim dunia memanas pilihan respons adaptasi mitigasi
pilihan respons mendukung tindakan jangka pendek
- sistem pengamatan iklim global gcos didirikan 1992 pengamatan informasi dibutuhkan
mengatasi terkait iklim diperoleh tersedia pengguna potensial
- musim dingin hangat memperpendek musim kegiatan rekreasi memancing es mengendarai
mobil salju bermain ski berseluncur salju merugikan ekonomi lokal bergantung danaudanau
membeku lambat mencair seabad memperpendek musim memancing es berseluncur es 1970an
lapisan es musim dingin great lakes menurun 63 persen suhu hangat memperpendek
musim tanah tertutup salju memperpendek musim kegiatan salju hujan salju tahunan
meningkat wilayah great lakes menguntungkan rekreasi musim dingin lokasi musim
dingin hangat kurangnya salju membahayakan kereta luncur anjing john beargrease
dog sled race dialihkan dipersingkat 70 mil kurangnya salju
- source_sentence: sajakah akibat ekonomi penebangan liar
sentences:
- penebangan liar membatasi kapasitas negaranegara produsen berinvestasi pembangunan
berkelanjutan mengurangi pendapatan mendorong korupsi berdampak pasar internasional
persaingan sehat
- kotamadya vila nova de famalicão dibentuk tanggal 21 maret 1835
- kritik revolusi hijau mencakup ketidakberlanjutannya prinsip populasi malthus
peralihan lahan pertanian berorientasi pemenuhan kebutuhan hidup lahan pertanian
berorientasi produksi bijibijian ekspor pakan ternak revolusi hijau dituduh mengurangi
ketahanan pangan orang berkontribusi lingkungan polusi air degradasi tanah emisi
gas rumah kaca
- source_sentence: dampak pandemi covid19 konsumsi energi
sentences:
- amerika serikat pembuangan sampah diatur badan perlindungan lingkungan epa badan
lingkungan negara
- pernyataan internasional hak asasi manusia perubahan iklim menekankan potensi
dampak buruk perubahan iklim hak hidup makanan air kesehatan perumahan pembangunan
penentuan nasib
- akibat pandemi covid19 penurunan signifikan penggunaan energi dunia 2020 total
permintaan energi dunia pulih 2021 mencapai rekor tertinggi 2022
- source_sentence: 1 usul beton
sentences:
- biro lingkungan eropa lsm lingkungan didirikan 1974 mewakili kelompok lingkungan
negara anggota lembaga menyediakan akses lembagalembaga eropa komisi eropa berpartisipasi
pembuatan kebijakan membentuk komite konsultatif badanbadan
- concrete berasal latin concretus bentuk pasif sempurna concrescere concrescere
berasal con crescere tumbuh
- pencapaian utama earth summit 1992 meliputi pembentukan unfccc kesepakatan konvensi
perubahan iklim kesepakatan aktivitas tanah masyarakat adat menyebabkan degradasi
lingkungan sesuai budaya konvensi keanekaragaman hayati dibuka ditandatangani
deklarasi rio lingkungan pembangunan agenda 21 prinsipprinsip kehutanan disetujui
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on denaya/indoSBERT-large
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: indoSBERT large eval
type: indoSBERT-large-eval
metrics:
- type: cosine_accuracy@1
value: 0.8359826589595376
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.921242774566474
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.940028901734104
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9624277456647399
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.8359826589595376
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3070809248554913
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.18800578034682075
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09624277456647395
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.8359826589595376
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.921242774566474
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.940028901734104
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9624277456647399
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9017660442398726
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8821233599412789
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8832348876185122
name: Cosine Map@100
---
# SentenceTransformer based on denaya/indoSBERT-large
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [denaya/indoSBERT-large](https://huggingface.co/denaya/indoSBERT-large). It maps sentences & paragraphs to a 256-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [denaya/indoSBERT-large](https://huggingface.co/denaya/indoSBERT-large) <!-- at revision 5c64d43f07f7054dfbf33d226b3066414b6ebc4a -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 256 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Dense({'in_features': 1024, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("annisamukhri/indosbert-climate-faq")
# Run inference
sentences = [
'1 usul beton',
'concrete berasal latin concretus bentuk pasif sempurna concrescere concrescere berasal con crescere tumbuh',
'pencapaian utama earth summit 1992 meliputi pembentukan unfccc kesepakatan konvensi perubahan iklim kesepakatan aktivitas tanah masyarakat adat menyebabkan degradasi lingkungan sesuai budaya konvensi keanekaragaman hayati dibuka ditandatangani deklarasi rio lingkungan pembangunan agenda 21 prinsipprinsip kehutanan disetujui',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 256]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `indoSBERT-large-eval`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.836 |
| cosine_accuracy@3 | 0.9212 |
| cosine_accuracy@5 | 0.94 |
| cosine_accuracy@10 | 0.9624 |
| cosine_precision@1 | 0.836 |
| cosine_precision@3 | 0.3071 |
| cosine_precision@5 | 0.188 |
| cosine_precision@10 | 0.0962 |
| cosine_recall@1 | 0.836 |
| cosine_recall@3 | 0.9212 |
| cosine_recall@5 | 0.94 |
| cosine_recall@10 | 0.9624 |
| **cosine_ndcg@10** | **0.9018** |
| cosine_mrr@10 | 0.8821 |
| cosine_map@100 | 0.8832 |
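To run this kind of evaluation on your own data, a minimal sketch (the tiny query/corpus dictionaries are placeholders, not the actual evaluation set):
```python
from sentence_transformers.evaluation import InformationRetrievalEvaluator

queries = {"q1": "dampak pandemi covid19 konsumsi energi"}
corpus = {"d1": "akibat pandemi covid19 penurunan signifikan penggunaan energi dunia 2020"}
relevant_docs = {"q1": {"d1"}}  # which corpus ids are relevant for each query

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="demo")
print(evaluator(model))  # `model` as loaded in the usage section above
```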
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 6,461 training samples
* Columns: <code>question</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | question | answer |
|:--------|:---------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 9.18 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 39.65 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
| question | answer |
|:------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>sektor industri dikaitkan konflik lingkungan</code> | <code>sektor industri dikaitkan konflik lingkungan pertambangan energi fosil biomassa pemanfaatan lahan pengelolaan air sektorsektor mencakup 67 konflik lingkungan terdokumentasi atlas keadilan lingkungan</code> |
| <code>ilmu teknik lingkungan berbeda teknik lingkungan ilmu lingkungan</code> | <code>ilmu teknik lingkungan memiliki mata kuliah teknik lingkungan dibandingkan ilmu lingkungan mata kuliah mengikuti kurikulum teknik lingkungan kuliah mahasiswa teknik lingkungan memilih bidangbidang desain fasilitas penyimpanan nuklir bioreaktor bakteri kebijakan lingkungan mahasiswa teknik lingkungan berfokus pembangunan fasilitas pengolahan penilaian dampak lingkungan mitigasi polusi udara</code> |
| <code>perusahaan manakah kali menemukan minyak nigeria</code> | <code>shellbp menemukan minyak nigeria oloibiri 1956</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Evaluation Dataset
#### Unnamed Dataset
* Size: 1,384 evaluation samples
* Columns: <code>question</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | question | answer |
|:--------|:---------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 8.85 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 39.69 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
| question | answer |
|:-----------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>dampak potensial perubahan iklim ketersediaan air somalia</code> | <code>proyeksi ketersediaan air somalia berdasarkan skenario emisi mempertimbangkan pertumbuhan populasi model peningkatan sejalan proyeksi curah hujan mempertimbangkan proyeksi pertumbuhan populasi ketersediaan air kapita berkurang setengahnya 2080 berdasarkan skenario emisi rcp26 rcp60 ketidakpastian seputar volume air tersedia diproyeksikan</code> |
| <code>peran neeri rencana implementasi nasional nip pops</code> | <code>neeri memainkan peran organisasi mitra rencana implementasi nasional nip pop india berkontribusi upaya negara mengatasi polutan organik persisten</code> |
| <code>perubahan iklim mempengaruhi pertanian connecticut</code> | <code>suhu hangat mengurangi hasil industri susu connecticut bernilai 70 juta sapi makan menghasilkan susu cuaca panas peternakan dirugikan harihari panas kekeringan banjir mengurangi hasil panen menunda tanggal tanam peternakan diuntungkan musim tanam efek pemupukan karbon dioksida</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 5
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | indoSBERT-large-eval_cosine_ndcg@10 |
|:------:|:----:|:-------------:|:---------------:|:-----------------------------------:|
| 0 | 0 | - | - | 0.6184 |
| 0.2475 | 100 | 0.3999 | 0.1869 | 0.7909 |
| 0.4950 | 200 | 0.1581 | 0.1060 | 0.8580 |
| 0.7426 | 300 | 0.1107 | 0.0884 | 0.8738 |
| 0.9901 | 400 | 0.1028 | 0.0822 | 0.8872 |
| 1.2376 | 500 | 0.0784 | 0.0694 | 0.8886 |
| 1.4851 | 600 | 0.015 | 0.0764 | 0.8891 |
| 1.7327 | 700 | 0.0052 | 0.0757 | 0.8921 |
| 1.9802 | 800 | 0.0061 | 0.0691 | 0.8914 |
| 2.2277 | 900 | 0.0051 | 0.0723 | 0.8943 |
| 2.4752 | 1000 | 0.0052 | 0.0709 | 0.8950 |
| 2.7228 | 1100 | 0.0013 | 0.0729 | 0.8968 |
| 2.9703 | 1200 | 0.001 | 0.0703 | 0.8984 |
| 3.2178 | 1300 | 0.0019 | 0.0649 | 0.9002 |
| 3.4653 | 1400 | 0.0007 | 0.0654 | 0.8989 |
| 3.7129 | 1500 | 0.0004 | 0.0668 | 0.8997 |
| 3.9604 | 1600 | 0.0005 | 0.0681 | 0.9002 |
| 4.2079 | 1700 | 0.0004 | 0.0676 | 0.9016 |
| 4.4554 | 1800 | 0.001 | 0.0666 | 0.9012 |
| 4.7030 | 1900 | 0.0003 | 0.0667 | 0.9012 |
| 4.9505 | 2000 | 0.0003 | 0.0670 | 0.9018 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.3.1
- Transformers: 4.45.2
- PyTorch: 2.5.1+cu121
- Accelerate: 1.1.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
CoolSpring/Qwen2-0.5B-Abyme-merge2 | CoolSpring | "2024-08-21T11:39:50Z" | 51 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"conversational",
"base_model:CoolSpring/Qwen2-0.5B-Abyme",
"base_model:merge:CoolSpring/Qwen2-0.5B-Abyme",
"base_model:Qwen/Qwen2-0.5B-Instruct",
"base_model:merge:Qwen/Qwen2-0.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-27T11:41:43Z" | ---
license: apache-2.0
base_model:
- CoolSpring/Qwen2-0.5B-Abyme
- Qwen/Qwen2-0.5B-Instruct
tags:
- merge
- mergekit
- lazymergekit
---
# Qwen2-0.5B-Abyme-merge2
Qwen2-0.5B-Abyme-merge2 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [CoolSpring/Qwen2-0.5B-Abyme](https://huggingface.co/CoolSpring/Qwen2-0.5B-Abyme)
* [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct)
## 🧩 Configuration
```yaml
models:
  - model: CoolSpring/Qwen2-0.5B-Abyme
  - model: Qwen/Qwen2-0.5B-Instruct
merge_method: slerp
parameters:
  t:
    - value: [0, 0, 0.2, 0.3, 0.4, 0.5, 0.4, 0.3, 0.2, 0, 0]
  embed_slerp: true
base_model: CoolSpring/Qwen2-0.5B-Abyme
dtype: bfloat16
tokenizer_source: union
```
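As a rough sketch, the merge could also be reproduced locally with mergekit's Python API (assuming `mergekit` is installed and the YAML above is saved as `config.yaml`; option names may vary between mergekit versions):
```python
# pip install mergekit
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above.
with open("config.yaml", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Run the SLERP merge and write the result to ./merge (hypothetical path).
run_merge(
    merge_config,
    out_path="./merge",
    options=MergeOptions(
        cuda=False,  # set True if a GPU is available
        copy_tokenizer=True,
    ),
)
```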
## 💻 Usage
```python
# pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch

model = "CoolSpring/Qwen2-0.5B-Abyme-merge2"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
TheMindExpansionNetwork/Pixel-Princess-Ravebabe-1111-lora-AUDIO | TheMindExpansionNetwork | "2025-03-29T05:07:05Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-29T05:04:33Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sealks/ddpm-celebahq-finetuned-butterflies-2epochs | sealks | "2024-05-14T01:44:29Z" | 44 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | "2024-05-14T01:42:38Z" | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
Describe your model here
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('sealks/ddpm-celebahq-finetuned-butterflies-2epochs')
image = pipeline().images[0]
image
```
|
davidschulte/ESM_kejian__codeparrot-train-more-filter-3.3b-cleaned_default | davidschulte | "2025-03-26T13:57:34Z" | 15 | 0 | null | [
"safetensors",
"embedding_space_map",
"BaseLM:bert-base-multilingual-uncased",
"dataset:kejian/codeparrot-train-more-filter-3.3b-cleaned",
"base_model:google-bert/bert-base-multilingual-uncased",
"base_model:finetune:google-bert/bert-base-multilingual-uncased",
"license:apache-2.0",
"region:us"
] | null | "2024-11-29T15:01:03Z" | ---
base_model: bert-base-multilingual-uncased
datasets:
- kejian/codeparrot-train-more-filter-3.3b-cleaned
license: apache-2.0
tags:
- embedding_space_map
- BaseLM:bert-base-multilingual-uncased
---
# ESM kejian/codeparrot-train-more-filter-3.3b-cleaned
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
ESM
- **Developed by:** David Schulte
- **Model type:** ESM
- **Base Model:** bert-base-multilingual-uncased
- **Intermediate Task:** kejian/codeparrot-train-more-filter-3.3b-cleaned
- **ESM architecture:** linear
- **ESM embedding dimension:** 768
- **Language(s) (NLP):** [More Information Needed]
- **License:** Apache-2.0 license
- **ESM version:** 0.1.0
## Training Details
### Intermediate Task
- **Task ID:** kejian/codeparrot-train-more-filter-3.3b-cleaned
- **Subset [optional]:** default
- **Text Column:** org_text
- **Label Column:** avg_score
- **Dataset Split:** train
- **Sample size [optional]:** 10000
- **Sample seed [optional]:** 42
### Training Procedure [optional]
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Language Model Training Hyperparameters [optional]
- **Epochs:** 3
- **Batch size:** 32
- **Learning rate:** 2e-05
- **Weight Decay:** 0.01
- **Optimizer**: AdamW
### ESM Training Hyperparameters [optional]
- **Epochs:** 10
- **Batch size:** 32
- **Learning rate:** 0.001
- **Weight Decay:** 0.01
- **Optimizer**: AdamW
### Additional training details [optional]
## Model evaluation
### Evaluation of fine-tuned language model [optional]
### Evaluation of ESM [optional]
MSE:
### Additional evaluation details [optional]
## What are Embedding Space Maps used for?
Embedding Space Maps are a part of ESM-LogME, an efficient method for finding intermediate datasets for transfer learning. There are two reasons to use ESM-LogME:
### You don't have enough training data for your problem
If you don't have enough training data for your problem, use ESM-LogME to find more.
You can supplement model training by including publicly available datasets in the training process.
1. Fine-tune a language model on a suitable intermediate dataset.
2. Fine-tune the resulting model on your target dataset.
This workflow is called intermediate task transfer learning, and it can significantly improve performance on the target task.
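A minimal sketch of this two-stage workflow (assuming both datasets expose `text` and `label` columns; dataset names, paths, and hyperparameters below are hypothetical placeholders):
```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "bert-base-multilingual-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)

def fine_tune(model, dataset_name, output_dir):
    # Tokenize the dataset and run a standard fine-tuning pass.
    dataset = load_dataset(dataset_name, split="train")
    dataset = dataset.map(
        lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
        batched=True,
    )
    args = TrainingArguments(output_dir=output_dir, num_train_epochs=3)
    Trainer(model=model, args=args, train_dataset=dataset).train()
    return model

model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
# 1. Fine-tune on the intermediate dataset recommended by ESM-LogME.
model = fine_tune(model, "intermediate_dataset", "./intermediate")  # hypothetical
# 2. Fine-tune the resulting model on your target dataset.
model = fine_tune(model, "target_dataset", "./target")              # hypothetical
```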
But what is a suitable dataset for your problem? ESM-LogME enables you to quickly rank thousands of datasets on the Hugging Face Hub by how well they are expected to transfer to your target task.
### You want to find similar datasets to your target dataset
ESM-LogME can be used like a search engine on the Hugging Face Hub. You can find tasks similar to your target task without having to rely on heuristics. ESM-LogME estimates how language models fine-tuned on each intermediate task would benefit your target task. This quantitative approach combines the effects of domain similarity and task similarity.
## How can I use ESM-LogME / ESMs?
[hf-dataset-selector on PyPI](https://pypi.org/project/hf-dataset-selector)
We release **hf-dataset-selector**, a Python package for intermediate task selection using Embedding Space Maps.
**hf-dataset-selector** fetches ESMs for a given language model and uses them to find the best dataset for applying intermediate training to the target task. ESMs are found by their tags on the Hugging Face Hub.
```python
from hfselect import Dataset, compute_task_ranking
# Load target dataset from the Hugging Face Hub
dataset = Dataset.from_hugging_face(
    name="stanfordnlp/imdb",
    split="train",
    text_col="text",
    label_col="label",
    is_regression=False,
    num_examples=1000,
    seed=42
)
# Fetch ESMs and rank tasks
task_ranking = compute_task_ranking(
    dataset=dataset,
    model_name="bert-base-multilingual-uncased"
)
# Display top 5 recommendations
print(task_ranking[:5])
```
```python
1. davanstrien/test_imdb_embedd2 Score: -0.618529
2. davanstrien/test_imdb_embedd Score: -0.618644
3. davanstrien/test1 Score: -0.619334
4. stanfordnlp/imdb Score: -0.619454
5. stanfordnlp/sst Score: -0.62995
```
| Rank | Task ID | Task Subset | Text Column | Label Column | Task Split | Num Examples | ESM Architecture | Score |
|-------:|:------------------------------|:----------------|:--------------|:---------------|:-------------|---------------:|:-------------------|----------:|
| 1 | davanstrien/test_imdb_embedd2 | default | text | label | train | 10000 | linear | -0.618529 |
| 2 | davanstrien/test_imdb_embedd | default | text | label | train | 10000 | linear | -0.618644 |
| 3 | davanstrien/test1 | default | text | label | train | 10000 | linear | -0.619334 |
| 4 | stanfordnlp/imdb | plain_text | text | label | train | 10000 | linear | -0.619454 |
| 5 | stanfordnlp/sst | dictionary | phrase | label | dictionary | 10000 | linear | -0.62995 |
| 6 | stanfordnlp/sst | default | sentence | label | train | 8544 | linear | -0.63312 |
| 7 | kuroneko5943/snap21 | CDs_and_Vinyl_5 | sentence | label | train | 6974 | linear | -0.634365 |
| 8 | kuroneko5943/snap21 | Video_Games_5 | sentence | label | train | 6997 | linear | -0.638787 |
| 9 | kuroneko5943/snap21 | Movies_and_TV_5 | sentence | label | train | 6989 | linear | -0.639068 |
| 10 | fancyzhx/amazon_polarity | amazon_polarity | content | label | train | 10000 | linear | -0.639718 |
For more information on how to use ESMs, please have a look at the [official GitHub repository](https://github.com/davidschulte/hf-dataset-selector). We provide further documentation and tutorials for finding intermediate datasets and training your own ESMs.
## How do Embedding Space Maps work?
<!-- This section describes the evaluation protocols and provides the results. -->
Embedding Space Maps (ESMs) are neural networks that approximate the effect of fine-tuning a language model on a task. They can be used to quickly transform embeddings from a base model to approximate how a fine-tuned model would embed the input text.
ESMs can be used for intermediate task selection with the ESM-LogME workflow.
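As an illustrative sketch, a linear ESM (the architecture used in this repository, with embedding dimension 768) is essentially a single learned linear map applied to base-model embeddings; the class below is hypothetical, not the package's actual implementation:
```python
import torch
import torch.nn as nn

class LinearESM(nn.Module):
    """Approximates the embedding shift caused by fine-tuning (sketch)."""

    def __init__(self, dim: int = 768):  # ESM embedding dimension from above
        super().__init__()
        self.map = nn.Linear(dim, dim)

    def forward(self, base_embeddings: torch.Tensor) -> torch.Tensor:
        # Map base-model embeddings to approximations of the embeddings a
        # model fine-tuned on the intermediate task would produce.
        return self.map(base_embeddings)

esm = LinearESM()
approx_embeddings = esm(torch.randn(4, 768))  # batch of base-model embeddings
```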
## How can I use Embedding Space Maps for Intermediate Task Selection?
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
If you are using this Embedding Space Maps, please cite our [paper](https://aclanthology.org/2024.emnlp-main.529/).
**BibTeX:**
```
@inproceedings{schulte-etal-2024-less,
title = "Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning",
author = "Schulte, David and
Hamborg, Felix and
Akbik, Alan",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.529/",
doi = "10.18653/v1/2024.emnlp-main.529",
pages = "9431--9442",
abstract = "Intermediate task transfer learning can greatly improve model performance. If, for example, one has little training data for emotion detection, first fine-tuning a language model on a sentiment classification dataset may improve performance strongly. But which task to choose for transfer learning? Prior methods producing useful task rankings are infeasible for large source pools, as they require forward passes through all source language models. We overcome this by introducing Embedding Space Maps (ESMs), light-weight neural networks that approximate the effect of fine-tuning a language model. We conduct the largest study on NLP task transferability and task selection with 12k source-target pairs. We find that applying ESMs on a prior method reduces execution time and disk space usage by factors of 10 and 278, respectively, while retaining high selection performance (avg. regret@5 score of 2.95)."
}
```
**APA:**
```
Schulte, D., Hamborg, F., & Akbik, A. (2024, November). Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (pp. 9431-9442).
```
## Additional Information
|
mradermacher/Unaligned-Thinker-PHI-4-GGUF | mradermacher | "2025-01-17T16:00:25Z" | 426 | 2 | transformers | [
"transformers",
"gguf",
"llama",
"trl",
"phi4",
"phi-4",
"unsloth",
"sft",
"en",
"dataset:fhai50032/Unaligned-Thinking-o1",
"base_model:fhai50032/Unaligned-Thinker-PHI-4",
"base_model:quantized:fhai50032/Unaligned-Thinker-PHI-4",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-17T13:43:00Z" | ---
base_model: fhai50032/Unaligned-Thinker-PHI-4
datasets:
- fhai50032/Unaligned-Thinking-o1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- transformers
- llama
- trl
- phi4
- phi-4
- unsloth
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/fhai50032/Unaligned-Thinker-PHI-4
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Unaligned-Thinker-PHI-4-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
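For example, a single-file quant from the table below can be loaded with llama-cpp-python (a minimal sketch, assuming `llama-cpp-python` is installed and the file has already been downloaded):
```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Path to a downloaded single-file quant from the table below.
llm = Llama(model_path="Unaligned-Thinker-PHI-4.Q4_K_M.gguf", n_ctx=4096)

output = llm("Explain what a GGUF file is.", max_tokens=128)
print(output["choices"][0]["text"])
```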
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Unaligned-Thinker-PHI-4-GGUF/resolve/main/Unaligned-Thinker-PHI-4.Q2_K.gguf) | Q2_K | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Unaligned-Thinker-PHI-4-GGUF/resolve/main/Unaligned-Thinker-PHI-4.Q3_K_S.gguf) | Q3_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Unaligned-Thinker-PHI-4-GGUF/resolve/main/Unaligned-Thinker-PHI-4.Q3_K_M.gguf) | Q3_K_M | 7.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Unaligned-Thinker-PHI-4-GGUF/resolve/main/Unaligned-Thinker-PHI-4.Q3_K_L.gguf) | Q3_K_L | 7.9 | |
| [GGUF](https://huggingface.co/mradermacher/Unaligned-Thinker-PHI-4-GGUF/resolve/main/Unaligned-Thinker-PHI-4.IQ4_XS.gguf) | IQ4_XS | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/Unaligned-Thinker-PHI-4-GGUF/resolve/main/Unaligned-Thinker-PHI-4.Q4_K_S.gguf) | Q4_K_S | 8.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Unaligned-Thinker-PHI-4-GGUF/resolve/main/Unaligned-Thinker-PHI-4.Q4_K_M.gguf) | Q4_K_M | 9.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Unaligned-Thinker-PHI-4-GGUF/resolve/main/Unaligned-Thinker-PHI-4.Q5_K_S.gguf) | Q5_K_S | 10.3 | |
| [GGUF](https://huggingface.co/mradermacher/Unaligned-Thinker-PHI-4-GGUF/resolve/main/Unaligned-Thinker-PHI-4.Q5_K_M.gguf) | Q5_K_M | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/Unaligned-Thinker-PHI-4-GGUF/resolve/main/Unaligned-Thinker-PHI-4.Q6_K.gguf) | Q6_K | 12.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Unaligned-Thinker-PHI-4-GGUF/resolve/main/Unaligned-Thinker-PHI-4.Q8_0.gguf) | Q8_0 | 15.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
pasukka/detail-classifier-new-app-v.1 | pasukka | "2025-02-24T09:54:55Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-02-24T09:53:47Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |