modelId (string) | author (string) | last_modified (timestamp, UTC) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp, UTC) | card (string) |
---|---|---|---|---|---|---|---|---|---|
DiederikMartens/mBERT_sa_cv_12_fold2 | DiederikMartens | 2024-05-28T06:11:59Z | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-05-28T05:58:27Z | ---
license: apache-2.0
base_model: google-bert/bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: mBERT_sa_cv_12_fold2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mBERT_sa_cv_12_fold2
This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5507
- F1: 0.5918
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
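These settings map directly onto `transformers.TrainingArguments`. The snippet below is an illustrative sketch rather than the training script used for this model; the output directory and anything not listed above are assumptions.

```python
# Illustrative only: the reported hyperparameters expressed as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mBERT_sa_cv_12_fold2",   # assumed output directory
    learning_rate=4.47e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    adam_beta1=0.9,                       # Adam betas/epsilon as reported above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```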
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.4277 | 0.5041 |
| No log | 2.0 | 452 | 0.4339 | 0.5226 |
| 0.4339 | 3.0 | 678 | 0.5507 | 0.5918 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
SenseLLM/FIM-SE-CL-7B | SenseLLM | 2024-05-28T06:11:56Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2405.17103",
"arxiv:2207.14255",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-05-28T05:49:19Z | ---
license: apache-2.0
language:
- en
---
## Empowering Character-level Text Infilling by Eliminating Sub-Tokens
<p align="center">
<a href="https://arxiv.org/abs/2405.17103">📄 Paper</a> •
<a href="https://github.com/SenseLLM/FIM-SE">🏠 Repo</a> •
<a href="https://huggingface.co/SenseLLM/FIM-SE-CL-13B">🤖 Models</a>
</p>
## Introduction
FIM-SE stands for Fill-In-the-Middle with both Starting and Ending character constraints. The method addresses character-level infilling tasks by using a line-level format, so that no sub-token has to be predicted during inference.
*(Figure omitted: illustration of the FIM-SE prompt format.)*
<hr>
## Models
| Model | Checkpoint | Size | License|
|:------|:-----------|:-----|:-------|
| FIM-SE-CL-7B | 🤗 [HF Link](https://huggingface.co/SenseLLM/FIM-SE-CL-7B) | 7B | [Llama2](https://ai.meta.com/llama/license/) |
| FIM-SE-CL-13B | 🤗 [HF Link](https://huggingface.co/SenseLLM/FIM-SE-CL-13B) | 13B | [Llama2](https://ai.meta.com/llama/license/) |
| FIM-SE-SC-1B | 🤗 [HF Link](https://huggingface.co/SenseLLM/FIM-SE-SC-1B) | 1B | [StarCoder](https://github.com/bigcode-project/starcoder/blob/main/LICENSE) |
| FIM-SE-SC-15B | 🤗 [HF Link](https://huggingface.co/SenseLLM/FIM-SE-SC-15B) | 15B | [StarCoder](https://github.com/bigcode-project/starcoder/blob/main/LICENSE) |
## How to Use
#### Prompt Format
As shown in the figure, the prompt is organized as
```text
<PRE>R-Prefix<SUF>R-Suffix<START>L-Prefix<END>F-Suffix<MID>
```
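To make the format concrete, the sketch below shows one way such a prompt could be assembled in Python. The line-level split (the last, possibly partial, prefix line becomes L-Prefix; the first, possibly partial, suffix line becomes F-Suffix) is an assumption based on the description above; see the paper and repo for the exact construction.

```python
# Hedged sketch of FIM-SE prompt assembly; the split rules are assumptions,
# only the tag order is taken verbatim from the format string above.
def build_fim_se_prompt(prefix: str, suffix: str) -> str:
    cut = prefix.rfind("\n") + 1                 # start of the last (partial) prefix line
    r_prefix, l_prefix = prefix[:cut], prefix[cut:]
    nl = suffix.find("\n")
    f_suffix = suffix if nl == -1 else suffix[: nl + 1]   # first (partial) suffix line
    r_suffix = "" if nl == -1 else suffix[nl + 1 :]
    return (
        f"<PRE>{r_prefix}<SUF>{r_suffix}"
        f"<START>{l_prefix}<END>{f_suffix}<MID>"
    )

prompt = build_fim_se_prompt("def add(a, b):\n    ret", "rn a + b\n")
```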
#### Inference Code
Please refer to our [GitHub Repo](https://github.com/SenseLLM/FIM-SE) for more technical details.
## Citation
If you find this repo useful for your research, please cite our paper:
```
@misc{ren2024empowering,
title={Empowering Character-level Text Infilling by Eliminating Sub-Tokens},
author={Houxing Ren and Mingjie Zhan and Zhongyuan Wu and Hongsheng Li},
year={2024},
eprint={2405.17103},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Acknowledgments
We thank the following amazing projects that truly inspired us:
- [FIM](https://arxiv.org/abs/2207.14255) |
DiederikMartens/tsBERT_sa_cv_12_fold2 | DiederikMartens | 2024-05-28T06:11:49Z | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:igorsterner/german-english-code-switching-bert",
"base_model:finetune:igorsterner/german-english-code-switching-bert",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-05-28T05:58:20Z | ---
license: mit
base_model: igorsterner/german-english-code-switching-bert
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: tsBERT_sa_cv_12_fold2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tsBERT_sa_cv_12_fold2
This model is a fine-tuned version of [igorsterner/german-english-code-switching-bert](https://huggingface.co/igorsterner/german-english-code-switching-bert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4553
- F1: 0.6531
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.3550 | 0.5693 |
| No log | 2.0 | 452 | 0.3737 | 0.6332 |
| 0.3364 | 3.0 | 678 | 0.4553 | 0.6531 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Codebee/Bert-FD | Codebee | 2024-05-28T06:10:17Z | 129 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"zh",
"arxiv:1810.04805",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2024-05-28T05:54:46Z | ---
language: zh
---
# Bert-base-chinese
## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
## Model Details
### Model Description
This model has been pre-trained for Chinese. Training and random input masking were applied independently to word pieces (as in the original BERT paper).
- **Developed by:** HuggingFace team
- **Model Type:** Fill-Mask
- **Language(s):** Chinese
- **License:** [More Information needed]
- **Parent Model:** See the [BERT base uncased model](https://huggingface.co/bert-base-uncased) for more information about the BERT base model.
### Model Sources
- **Paper:** [BERT](https://arxiv.org/abs/1810.04805)
## Uses
#### Direct Use
This model can be used for masked language modeling.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## Training
#### Training Procedure
* **type_vocab_size:** 2
* **vocab_size:** 21128
* **num_hidden_layers:** 12
#### Training Data
[More Information Needed]
## Evaluation
#### Results
[More Information Needed]
## How to Get Started With the Model
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModelForMaskedLM.from_pretrained("bert-base-chinese")
```
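As a follow-up (not part of the original card), a minimal sketch of querying the model through the `fill-mask` pipeline:

```python
# Hedged usage sketch: fill a [MASK] token with bert-base-chinese.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-chinese")
print(fill_mask("巴黎是[MASK]国的首都。"))  # prints the top candidate tokens
```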
|
thanhduc1180/vistral_checkpoint_1k9_vietnews | thanhduc1180 | 2024-05-28T06:10:16Z | 1 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:Viet-Mistral/Vistral-7B-Chat",
"base_model:adapter:Viet-Mistral/Vistral-7B-Chat",
"region:us"
]
| null | 2024-05-27T20:01:10Z | ---
library_name: peft
base_model: Viet-Mistral/Vistral-7B-Chat
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
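Since this section is left open, the following is a minimal, hedged sketch of loading the adapter on its declared base model (`Viet-Mistral/Vistral-7B-Chat`); it is not code provided by the author.

```python
# Illustrative sketch: attach the PEFT adapter to its base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Viet-Mistral/Vistral-7B-Chat"
adapter_id = "thanhduc1180/vistral_checkpoint_1k9_vietnews"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")  # requires accelerate
model = PeftModel.from_pretrained(model, adapter_id)
```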
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
Devanshj7/results | Devanshj7 | 2024-05-28T06:06:59Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
]
| null | 2024-05-28T06:06:16Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00025
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 100
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
AdnanRiaz107/CodePhi-3-mini-4k-instruct-pythonAPPSWO | AdnanRiaz107 | 2024-05-28T06:02:38Z | 129 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"generated_from_trainer",
"conversational",
"custom_code",
"base_model:AdnanRiaz107/CodePhi-3-mini-4k-instruct-python",
"base_model:finetune:AdnanRiaz107/CodePhi-3-mini-4k-instruct-python",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-05-28T05:59:57Z | ---
license: mit
base_model: AdnanRiaz107/CodePhi-3-mini-4k-instruct-python
tags:
- generated_from_trainer
model-index:
- name: CodePhi-3-mini-4k-instruct-pythonAPPSWO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CodePhi-3-mini-4k-instruct-pythonAPPSWO
This model is a fine-tuned version of [AdnanRiaz107/CodePhi-3-mini-4k-instruct-python](https://huggingface.co/AdnanRiaz107/CodePhi-3-mini-4k-instruct-python) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9014
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1
- training_steps: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.641 | 1.0 | 2 | 0.9014 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
LLMproj1/mypersona-llama3-8b | LLMproj1 | 2024-05-28T06:01:14Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:adapter:unsloth/llama-3-8b-bnb-4bit",
"region:us"
]
| null | 2024-05-28T05:59:35Z | ---
library_name: peft
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
forwarder1121/results | forwarder1121 | 2024-05-28T06:00:25Z | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-05-19T08:50:50Z | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: distilbert-base-uncased
datasets:
- emotion
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1622
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2495 | 1.0 | 1000 | 0.1960 |
| 0.1503 | 2.0 | 2000 | 0.1748 |
| 0.0958 | 3.0 | 3000 | 0.1622 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
DiederikMartens/tsBERT_sa_cv_12_fold1 | DiederikMartens | 2024-05-28T05:58:14Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:igorsterner/german-english-code-switching-bert",
"base_model:finetune:igorsterner/german-english-code-switching-bert",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-05-28T05:44:43Z | ---
license: mit
base_model: igorsterner/german-english-code-switching-bert
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: tsBERT_sa_cv_12_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tsBERT_sa_cv_12_fold1
This model is a fine-tuned version of [igorsterner/german-english-code-switching-bert](https://huggingface.co/igorsterner/german-english-code-switching-bert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3862
- F1: 0.6784
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.3294 | 0.5782 |
| No log | 2.0 | 452 | 0.3862 | 0.6784 |
| 0.3345 | 3.0 | 678 | 0.4967 | 0.6665 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
John6666/pony-pencil-xl-v102-sdxl | John6666 | 2024-05-28T05:56:08Z | 33 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2024-05-28T05:43:02Z | ---
license: other
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
---
Original model is [here](https://huggingface.co/bluepen5805/pony_pencil-XL) or on [Civitai](https://civitai.com/models/432249?modelVersionId=482673).
|
Shubham014/Mistral-7B-Instruct-v0.2-document-parsar | Shubham014 | 2024-05-28T05:54:54Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-05-28T05:54:54Z | ---
license: apache-2.0
---
|
DiederikMartens/gBERT_sa_cv_12_fold1 | DiederikMartens | 2024-05-28T05:54:29Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-german-cased",
"base_model:finetune:google-bert/bert-base-german-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-05-28T05:43:14Z | ---
license: mit
base_model: google-bert/bert-base-german-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: gBERT_sa_cv_12_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gBERT_sa_cv_12_fold1
This model is a fine-tuned version of [google-bert/bert-base-german-cased](https://huggingface.co/google-bert/bert-base-german-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4368
- F1: 0.6532
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.3275 | 0.6009 |
| No log | 2.0 | 452 | 0.3741 | 0.5998 |
| 0.3485 | 3.0 | 678 | 0.4368 | 0.6532 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
minmingzhu02/Mistral-7B-dpo-habana | minmingzhu02 | 2024-05-28T05:51:20Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"optimum_habana",
"region:us"
]
| null | 2024-05-28T05:49:37Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
Madhumita19/merged-mistralfinetuned-v2.0-1 | Madhumita19 | 2024-05-28T05:48:08Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-05-28T05:45:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DiederikMartens/mBERT_sa_cv_12_fold0 | DiederikMartens | 2024-05-28T05:44:41Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-05-28T05:30:27Z | ---
license: apache-2.0
base_model: google-bert/bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: mBERT_sa_cv_12_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mBERT_sa_cv_12_fold0
This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4448
- F1: 0.5223
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.5353 | 0.3204 |
| No log | 2.0 | 452 | 0.4513 | 0.5155 |
| 0.5367 | 3.0 | 678 | 0.4448 | 0.5223 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Bhardawaj/slc-opt-125-gptq | Bhardawaj | 2024-05-28T05:42:13Z | 77 | 0 | transformers | [
"transformers",
"safetensors",
"opt",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
]
| text-generation | 2024-05-28T05:42:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Hemg/detr-resnet-50_finetuned_cppe5 | Hemg | 2024-05-28T05:32:56Z | 188 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| object-detection | 2024-05-27T12:28:35Z | ---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
model-index:
- name: detr-resnet-50_finetuned_cppe5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_cppe5
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2+cpu
- Datasets 2.18.0
- Tokenizers 0.15.2
|
T3Q-LLM/T3Q-LMM3-MK-v2.0 | T3Q-LLM | 2024-05-28T05:32:04Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-05-28T02:20:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Evaluation
hf-causal-experimental (pretrained=T3Q-LLM/T3Q-LMM3-MK-v2.0,use_accelerate=true,trust_remote_code=true), limit: None, provide_description: False, num_fewshot: 0, batch_size: 8
| Task |Version| Metric |Value | |Stderr|
|----------------|------:|--------|-----:|---|-----:|
|kobest_boolq | 0|acc |0.6353|± |0.0129|
| | |macro_f1|0.5847|± |0.0136|
|kobest_copa | 0|acc |0.6730|± |0.0148|
| | |macro_f1|0.6723|± |0.0148|
|kobest_hellaswag| 0|acc |0.4720|± |0.0223|
| | |acc_norm|0.5920|± |0.0220|
| | |macro_f1|0.4696|± |0.0224|
|kobest_sentineg | 0|acc |0.4962|± |0.0251|
| | |macro_f1|0.3360|± |0.0125| |
agier9/UAE-Large-V1-Q5_K_S-GGUF | agier9 | 2024-05-28T05:30:34Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"gguf",
"mteb",
"sentence_embedding",
"feature_extraction",
"transformers",
"transformers.js",
"llama-cpp",
"gguf-my-repo",
"en",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us",
"feature-extraction"
]
| null | 2024-05-28T05:30:31Z | ---
language:
- en
license: mit
tags:
- mteb
- sentence_embedding
- feature_extraction
- sentence-transformers
- transformers
- transformers.js
- llama-cpp
- gguf-my-repo
model-index:
- name: UAE-Large-V1
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 75.55223880597015
- type: ap
value: 38.264070815317794
- type: f1
value: 69.40977934769845
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 92.84267499999999
- type: ap
value: 89.57568507997713
- type: f1
value: 92.82590734337774
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 48.292
- type: f1
value: 47.90257816032778
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 42.105
- type: map_at_10
value: 58.181000000000004
- type: map_at_100
value: 58.653999999999996
- type: map_at_1000
value: 58.657000000000004
- type: map_at_3
value: 54.386
- type: map_at_5
value: 56.757999999999996
- type: mrr_at_1
value: 42.745
- type: mrr_at_10
value: 58.437
- type: mrr_at_100
value: 58.894999999999996
- type: mrr_at_1000
value: 58.897999999999996
- type: mrr_at_3
value: 54.635
- type: mrr_at_5
value: 56.99999999999999
- type: ndcg_at_1
value: 42.105
- type: ndcg_at_10
value: 66.14999999999999
- type: ndcg_at_100
value: 68.048
- type: ndcg_at_1000
value: 68.11399999999999
- type: ndcg_at_3
value: 58.477000000000004
- type: ndcg_at_5
value: 62.768
- type: precision_at_1
value: 42.105
- type: precision_at_10
value: 9.110999999999999
- type: precision_at_100
value: 0.991
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 23.447000000000003
- type: precision_at_5
value: 16.159000000000002
- type: recall_at_1
value: 42.105
- type: recall_at_10
value: 91.11
- type: recall_at_100
value: 99.14699999999999
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 70.341
- type: recall_at_5
value: 80.797
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 49.02580759154173
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 43.093601280163554
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 64.19590406875427
- type: mrr
value: 77.09547992788991
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 87.86678362843676
- type: cos_sim_spearman
value: 86.1423242570783
- type: euclidean_pearson
value: 85.98994198511751
- type: euclidean_spearman
value: 86.48209103503942
- type: manhattan_pearson
value: 85.6446436316182
- type: manhattan_spearman
value: 86.21039809734357
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 87.69155844155844
- type: f1
value: 87.68109381943547
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 39.37501687500394
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 37.23401405155885
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.232
- type: map_at_10
value: 41.404999999999994
- type: map_at_100
value: 42.896
- type: map_at_1000
value: 43.028
- type: map_at_3
value: 37.925
- type: map_at_5
value: 39.865
- type: mrr_at_1
value: 36.338
- type: mrr_at_10
value: 46.969
- type: mrr_at_100
value: 47.684
- type: mrr_at_1000
value: 47.731
- type: mrr_at_3
value: 44.063
- type: mrr_at_5
value: 45.908
- type: ndcg_at_1
value: 36.338
- type: ndcg_at_10
value: 47.887
- type: ndcg_at_100
value: 53.357
- type: ndcg_at_1000
value: 55.376999999999995
- type: ndcg_at_3
value: 42.588
- type: ndcg_at_5
value: 45.132
- type: precision_at_1
value: 36.338
- type: precision_at_10
value: 9.17
- type: precision_at_100
value: 1.4909999999999999
- type: precision_at_1000
value: 0.196
- type: precision_at_3
value: 20.315
- type: precision_at_5
value: 14.793000000000001
- type: recall_at_1
value: 30.232
- type: recall_at_10
value: 60.67399999999999
- type: recall_at_100
value: 83.628
- type: recall_at_1000
value: 96.209
- type: recall_at_3
value: 45.48
- type: recall_at_5
value: 52.354
- type: map_at_1
value: 32.237
- type: map_at_10
value: 42.829
- type: map_at_100
value: 44.065
- type: map_at_1000
value: 44.199
- type: map_at_3
value: 39.885999999999996
- type: map_at_5
value: 41.55
- type: mrr_at_1
value: 40.064
- type: mrr_at_10
value: 48.611
- type: mrr_at_100
value: 49.245
- type: mrr_at_1000
value: 49.29
- type: mrr_at_3
value: 46.561
- type: mrr_at_5
value: 47.771
- type: ndcg_at_1
value: 40.064
- type: ndcg_at_10
value: 48.388
- type: ndcg_at_100
value: 52.666999999999994
- type: ndcg_at_1000
value: 54.67100000000001
- type: ndcg_at_3
value: 44.504
- type: ndcg_at_5
value: 46.303
- type: precision_at_1
value: 40.064
- type: precision_at_10
value: 9.051
- type: precision_at_100
value: 1.4500000000000002
- type: precision_at_1000
value: 0.193
- type: precision_at_3
value: 21.444
- type: precision_at_5
value: 15.045
- type: recall_at_1
value: 32.237
- type: recall_at_10
value: 57.943999999999996
- type: recall_at_100
value: 75.98700000000001
- type: recall_at_1000
value: 88.453
- type: recall_at_3
value: 46.268
- type: recall_at_5
value: 51.459999999999994
- type: map_at_1
value: 38.797
- type: map_at_10
value: 51.263000000000005
- type: map_at_100
value: 52.333
- type: map_at_1000
value: 52.393
- type: map_at_3
value: 47.936
- type: map_at_5
value: 49.844
- type: mrr_at_1
value: 44.389
- type: mrr_at_10
value: 54.601
- type: mrr_at_100
value: 55.300000000000004
- type: mrr_at_1000
value: 55.333
- type: mrr_at_3
value: 52.068999999999996
- type: mrr_at_5
value: 53.627
- type: ndcg_at_1
value: 44.389
- type: ndcg_at_10
value: 57.193000000000005
- type: ndcg_at_100
value: 61.307
- type: ndcg_at_1000
value: 62.529
- type: ndcg_at_3
value: 51.607
- type: ndcg_at_5
value: 54.409
- type: precision_at_1
value: 44.389
- type: precision_at_10
value: 9.26
- type: precision_at_100
value: 1.222
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 23.03
- type: precision_at_5
value: 15.887
- type: recall_at_1
value: 38.797
- type: recall_at_10
value: 71.449
- type: recall_at_100
value: 88.881
- type: recall_at_1000
value: 97.52
- type: recall_at_3
value: 56.503
- type: recall_at_5
value: 63.392
- type: map_at_1
value: 27.291999999999998
- type: map_at_10
value: 35.65
- type: map_at_100
value: 36.689
- type: map_at_1000
value: 36.753
- type: map_at_3
value: 32.995000000000005
- type: map_at_5
value: 34.409
- type: mrr_at_1
value: 29.04
- type: mrr_at_10
value: 37.486000000000004
- type: mrr_at_100
value: 38.394
- type: mrr_at_1000
value: 38.445
- type: mrr_at_3
value: 35.028
- type: mrr_at_5
value: 36.305
- type: ndcg_at_1
value: 29.04
- type: ndcg_at_10
value: 40.613
- type: ndcg_at_100
value: 45.733000000000004
- type: ndcg_at_1000
value: 47.447
- type: ndcg_at_3
value: 35.339999999999996
- type: ndcg_at_5
value: 37.706
- type: precision_at_1
value: 29.04
- type: precision_at_10
value: 6.192
- type: precision_at_100
value: 0.9249999999999999
- type: precision_at_1000
value: 0.11
- type: precision_at_3
value: 14.802000000000001
- type: precision_at_5
value: 10.305
- type: recall_at_1
value: 27.291999999999998
- type: recall_at_10
value: 54.25299999999999
- type: recall_at_100
value: 77.773
- type: recall_at_1000
value: 90.795
- type: recall_at_3
value: 39.731
- type: recall_at_5
value: 45.403999999999996
- type: map_at_1
value: 18.326
- type: map_at_10
value: 26.290999999999997
- type: map_at_100
value: 27.456999999999997
- type: map_at_1000
value: 27.583000000000002
- type: map_at_3
value: 23.578
- type: map_at_5
value: 25.113000000000003
- type: mrr_at_1
value: 22.637
- type: mrr_at_10
value: 31.139
- type: mrr_at_100
value: 32.074999999999996
- type: mrr_at_1000
value: 32.147
- type: mrr_at_3
value: 28.483000000000004
- type: mrr_at_5
value: 29.963
- type: ndcg_at_1
value: 22.637
- type: ndcg_at_10
value: 31.717000000000002
- type: ndcg_at_100
value: 37.201
- type: ndcg_at_1000
value: 40.088
- type: ndcg_at_3
value: 26.686
- type: ndcg_at_5
value: 29.076999999999998
- type: precision_at_1
value: 22.637
- type: precision_at_10
value: 5.7090000000000005
- type: precision_at_100
value: 0.979
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_3
value: 12.894
- type: precision_at_5
value: 9.328
- type: recall_at_1
value: 18.326
- type: recall_at_10
value: 43.824999999999996
- type: recall_at_100
value: 67.316
- type: recall_at_1000
value: 87.481
- type: recall_at_3
value: 29.866999999999997
- type: recall_at_5
value: 35.961999999999996
- type: map_at_1
value: 29.875
- type: map_at_10
value: 40.458
- type: map_at_100
value: 41.772
- type: map_at_1000
value: 41.882999999999996
- type: map_at_3
value: 37.086999999999996
- type: map_at_5
value: 39.153
- type: mrr_at_1
value: 36.381
- type: mrr_at_10
value: 46.190999999999995
- type: mrr_at_100
value: 46.983999999999995
- type: mrr_at_1000
value: 47.032000000000004
- type: mrr_at_3
value: 43.486999999999995
- type: mrr_at_5
value: 45.249
- type: ndcg_at_1
value: 36.381
- type: ndcg_at_10
value: 46.602
- type: ndcg_at_100
value: 51.885999999999996
- type: ndcg_at_1000
value: 53.895
- type: ndcg_at_3
value: 41.155
- type: ndcg_at_5
value: 44.182
- type: precision_at_1
value: 36.381
- type: precision_at_10
value: 8.402
- type: precision_at_100
value: 1.278
- type: precision_at_1000
value: 0.16199999999999998
- type: precision_at_3
value: 19.346
- type: precision_at_5
value: 14.09
- type: recall_at_1
value: 29.875
- type: recall_at_10
value: 59.065999999999995
- type: recall_at_100
value: 80.923
- type: recall_at_1000
value: 93.927
- type: recall_at_3
value: 44.462
- type: recall_at_5
value: 51.89
- type: map_at_1
value: 24.94
- type: map_at_10
value: 35.125
- type: map_at_100
value: 36.476
- type: map_at_1000
value: 36.579
- type: map_at_3
value: 31.840000000000003
- type: map_at_5
value: 33.647
- type: mrr_at_1
value: 30.936000000000003
- type: mrr_at_10
value: 40.637
- type: mrr_at_100
value: 41.471000000000004
- type: mrr_at_1000
value: 41.525
- type: mrr_at_3
value: 38.013999999999996
- type: mrr_at_5
value: 39.469
- type: ndcg_at_1
value: 30.936000000000003
- type: ndcg_at_10
value: 41.295
- type: ndcg_at_100
value: 46.92
- type: ndcg_at_1000
value: 49.183
- type: ndcg_at_3
value: 35.811
- type: ndcg_at_5
value: 38.306000000000004
- type: precision_at_1
value: 30.936000000000003
- type: precision_at_10
value: 7.728
- type: precision_at_100
value: 1.226
- type: precision_at_1000
value: 0.158
- type: precision_at_3
value: 17.237
- type: precision_at_5
value: 12.42
- type: recall_at_1
value: 24.94
- type: recall_at_10
value: 54.235
- type: recall_at_100
value: 78.314
- type: recall_at_1000
value: 93.973
- type: recall_at_3
value: 38.925
- type: recall_at_5
value: 45.505
- type: map_at_1
value: 26.250833333333333
- type: map_at_10
value: 35.46875
- type: map_at_100
value: 36.667
- type: map_at_1000
value: 36.78025
- type: map_at_3
value: 32.56733333333334
- type: map_at_5
value: 34.20333333333333
- type: mrr_at_1
value: 30.8945
- type: mrr_at_10
value: 39.636833333333335
- type: mrr_at_100
value: 40.46508333333333
- type: mrr_at_1000
value: 40.521249999999995
- type: mrr_at_3
value: 37.140166666666666
- type: mrr_at_5
value: 38.60999999999999
- type: ndcg_at_1
value: 30.8945
- type: ndcg_at_10
value: 40.93441666666667
- type: ndcg_at_100
value: 46.062416666666664
- type: ndcg_at_1000
value: 48.28341666666667
- type: ndcg_at_3
value: 35.97575
- type: ndcg_at_5
value: 38.3785
- type: precision_at_1
value: 30.8945
- type: precision_at_10
value: 7.180250000000001
- type: precision_at_100
value: 1.1468333333333334
- type: precision_at_1000
value: 0.15283333333333332
- type: precision_at_3
value: 16.525583333333334
- type: precision_at_5
value: 11.798333333333332
- type: recall_at_1
value: 26.250833333333333
- type: recall_at_10
value: 52.96108333333333
- type: recall_at_100
value: 75.45908333333334
- type: recall_at_1000
value: 90.73924999999998
- type: recall_at_3
value: 39.25483333333333
- type: recall_at_5
value: 45.37950000000001
- type: map_at_1
value: 24.595
- type: map_at_10
value: 31.747999999999998
- type: map_at_100
value: 32.62
- type: map_at_1000
value: 32.713
- type: map_at_3
value: 29.48
- type: map_at_5
value: 30.635
- type: mrr_at_1
value: 27.607
- type: mrr_at_10
value: 34.449000000000005
- type: mrr_at_100
value: 35.182
- type: mrr_at_1000
value: 35.254000000000005
- type: mrr_at_3
value: 32.413
- type: mrr_at_5
value: 33.372
- type: ndcg_at_1
value: 27.607
- type: ndcg_at_10
value: 36.041000000000004
- type: ndcg_at_100
value: 40.514
- type: ndcg_at_1000
value: 42.851
- type: ndcg_at_3
value: 31.689
- type: ndcg_at_5
value: 33.479
- type: precision_at_1
value: 27.607
- type: precision_at_10
value: 5.66
- type: precision_at_100
value: 0.868
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 13.446
- type: precision_at_5
value: 9.264
- type: recall_at_1
value: 24.595
- type: recall_at_10
value: 46.79
- type: recall_at_100
value: 67.413
- type: recall_at_1000
value: 84.753
- type: recall_at_3
value: 34.644999999999996
- type: recall_at_5
value: 39.09
- type: map_at_1
value: 17.333000000000002
- type: map_at_10
value: 24.427
- type: map_at_100
value: 25.576
- type: map_at_1000
value: 25.692999999999998
- type: map_at_3
value: 22.002
- type: map_at_5
value: 23.249
- type: mrr_at_1
value: 20.716
- type: mrr_at_10
value: 28.072000000000003
- type: mrr_at_100
value: 29.067
- type: mrr_at_1000
value: 29.137
- type: mrr_at_3
value: 25.832
- type: mrr_at_5
value: 27.045
- type: ndcg_at_1
value: 20.716
- type: ndcg_at_10
value: 29.109
- type: ndcg_at_100
value: 34.797
- type: ndcg_at_1000
value: 37.503
- type: ndcg_at_3
value: 24.668
- type: ndcg_at_5
value: 26.552999999999997
- type: precision_at_1
value: 20.716
- type: precision_at_10
value: 5.351
- type: precision_at_100
value: 0.955
- type: precision_at_1000
value: 0.136
- type: precision_at_3
value: 11.584999999999999
- type: precision_at_5
value: 8.362
- type: recall_at_1
value: 17.333000000000002
- type: recall_at_10
value: 39.604
- type: recall_at_100
value: 65.525
- type: recall_at_1000
value: 84.651
- type: recall_at_3
value: 27.199
- type: recall_at_5
value: 32.019
- type: map_at_1
value: 26.342
- type: map_at_10
value: 35.349000000000004
- type: map_at_100
value: 36.443
- type: map_at_1000
value: 36.548
- type: map_at_3
value: 32.307
- type: map_at_5
value: 34.164
- type: mrr_at_1
value: 31.063000000000002
- type: mrr_at_10
value: 39.703
- type: mrr_at_100
value: 40.555
- type: mrr_at_1000
value: 40.614
- type: mrr_at_3
value: 37.141999999999996
- type: mrr_at_5
value: 38.812000000000005
- type: ndcg_at_1
value: 31.063000000000002
- type: ndcg_at_10
value: 40.873
- type: ndcg_at_100
value: 45.896
- type: ndcg_at_1000
value: 48.205999999999996
- type: ndcg_at_3
value: 35.522
- type: ndcg_at_5
value: 38.419
- type: precision_at_1
value: 31.063000000000002
- type: precision_at_10
value: 6.866
- type: precision_at_100
value: 1.053
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 16.014
- type: precision_at_5
value: 11.604000000000001
- type: recall_at_1
value: 26.342
- type: recall_at_10
value: 53.40200000000001
- type: recall_at_100
value: 75.251
- type: recall_at_1000
value: 91.13799999999999
- type: recall_at_3
value: 39.103
- type: recall_at_5
value: 46.357
- type: map_at_1
value: 23.71
- type: map_at_10
value: 32.153999999999996
- type: map_at_100
value: 33.821
- type: map_at_1000
value: 34.034
- type: map_at_3
value: 29.376
- type: map_at_5
value: 30.878
- type: mrr_at_1
value: 28.458
- type: mrr_at_10
value: 36.775999999999996
- type: mrr_at_100
value: 37.804
- type: mrr_at_1000
value: 37.858999999999995
- type: mrr_at_3
value: 34.123999999999995
- type: mrr_at_5
value: 35.596
- type: ndcg_at_1
value: 28.458
- type: ndcg_at_10
value: 37.858999999999995
- type: ndcg_at_100
value: 44.194
- type: ndcg_at_1000
value: 46.744
- type: ndcg_at_3
value: 33.348
- type: ndcg_at_5
value: 35.448
- type: precision_at_1
value: 28.458
- type: precision_at_10
value: 7.4510000000000005
- type: precision_at_100
value: 1.5
- type: precision_at_1000
value: 0.23700000000000002
- type: precision_at_3
value: 15.809999999999999
- type: precision_at_5
value: 11.462
- type: recall_at_1
value: 23.71
- type: recall_at_10
value: 48.272999999999996
- type: recall_at_100
value: 77.134
- type: recall_at_1000
value: 93.001
- type: recall_at_3
value: 35.480000000000004
- type: recall_at_5
value: 41.19
- type: map_at_1
value: 21.331
- type: map_at_10
value: 28.926000000000002
- type: map_at_100
value: 29.855999999999998
- type: map_at_1000
value: 29.957
- type: map_at_3
value: 26.395999999999997
- type: map_at_5
value: 27.933000000000003
- type: mrr_at_1
value: 23.105
- type: mrr_at_10
value: 31.008000000000003
- type: mrr_at_100
value: 31.819999999999997
- type: mrr_at_1000
value: 31.887999999999998
- type: mrr_at_3
value: 28.466
- type: mrr_at_5
value: 30.203000000000003
- type: ndcg_at_1
value: 23.105
- type: ndcg_at_10
value: 33.635999999999996
- type: ndcg_at_100
value: 38.277
- type: ndcg_at_1000
value: 40.907
- type: ndcg_at_3
value: 28.791
- type: ndcg_at_5
value: 31.528
- type: precision_at_1
value: 23.105
- type: precision_at_10
value: 5.323
- type: precision_at_100
value: 0.815
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 12.384
- type: precision_at_5
value: 9.02
- type: recall_at_1
value: 21.331
- type: recall_at_10
value: 46.018
- type: recall_at_100
value: 67.364
- type: recall_at_1000
value: 86.97
- type: recall_at_3
value: 33.395
- type: recall_at_5
value: 39.931
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 17.011000000000003
- type: map_at_10
value: 28.816999999999997
- type: map_at_100
value: 30.761
- type: map_at_1000
value: 30.958000000000002
- type: map_at_3
value: 24.044999999999998
- type: map_at_5
value: 26.557
- type: mrr_at_1
value: 38.696999999999996
- type: mrr_at_10
value: 50.464
- type: mrr_at_100
value: 51.193999999999996
- type: mrr_at_1000
value: 51.219
- type: mrr_at_3
value: 47.339999999999996
- type: mrr_at_5
value: 49.346000000000004
- type: ndcg_at_1
value: 38.696999999999996
- type: ndcg_at_10
value: 38.53
- type: ndcg_at_100
value: 45.525
- type: ndcg_at_1000
value: 48.685
- type: ndcg_at_3
value: 32.282
- type: ndcg_at_5
value: 34.482
- type: precision_at_1
value: 38.696999999999996
- type: precision_at_10
value: 11.895999999999999
- type: precision_at_100
value: 1.95
- type: precision_at_1000
value: 0.254
- type: precision_at_3
value: 24.038999999999998
- type: precision_at_5
value: 18.332
- type: recall_at_1
value: 17.011000000000003
- type: recall_at_10
value: 44.452999999999996
- type: recall_at_100
value: 68.223
- type: recall_at_1000
value: 85.653
- type: recall_at_3
value: 28.784
- type: recall_at_5
value: 35.66
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.516
- type: map_at_10
value: 21.439
- type: map_at_100
value: 31.517
- type: map_at_1000
value: 33.267
- type: map_at_3
value: 15.004999999999999
- type: map_at_5
value: 17.793999999999997
- type: mrr_at_1
value: 71.25
- type: mrr_at_10
value: 79.071
- type: mrr_at_100
value: 79.325
- type: mrr_at_1000
value: 79.33
- type: mrr_at_3
value: 77.708
- type: mrr_at_5
value: 78.546
- type: ndcg_at_1
value: 58.62500000000001
- type: ndcg_at_10
value: 44.889
- type: ndcg_at_100
value: 50.536
- type: ndcg_at_1000
value: 57.724
- type: ndcg_at_3
value: 49.32
- type: ndcg_at_5
value: 46.775
- type: precision_at_1
value: 71.25
- type: precision_at_10
value: 36.175000000000004
- type: precision_at_100
value: 11.940000000000001
- type: precision_at_1000
value: 2.178
- type: precision_at_3
value: 53.583000000000006
- type: precision_at_5
value: 45.550000000000004
- type: recall_at_1
value: 9.516
- type: recall_at_10
value: 27.028000000000002
- type: recall_at_100
value: 57.581
- type: recall_at_1000
value: 80.623
- type: recall_at_3
value: 16.313
- type: recall_at_5
value: 20.674
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 51.74999999999999
- type: f1
value: 46.46706502669774
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 77.266
- type: map_at_10
value: 84.89999999999999
- type: map_at_100
value: 85.109
- type: map_at_1000
value: 85.123
- type: map_at_3
value: 83.898
- type: map_at_5
value: 84.541
- type: mrr_at_1
value: 83.138
- type: mrr_at_10
value: 89.37
- type: mrr_at_100
value: 89.432
- type: mrr_at_1000
value: 89.43299999999999
- type: mrr_at_3
value: 88.836
- type: mrr_at_5
value: 89.21
- type: ndcg_at_1
value: 83.138
- type: ndcg_at_10
value: 88.244
- type: ndcg_at_100
value: 88.98700000000001
- type: ndcg_at_1000
value: 89.21900000000001
- type: ndcg_at_3
value: 86.825
- type: ndcg_at_5
value: 87.636
- type: precision_at_1
value: 83.138
- type: precision_at_10
value: 10.47
- type: precision_at_100
value: 1.1079999999999999
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 32.933
- type: precision_at_5
value: 20.36
- type: recall_at_1
value: 77.266
- type: recall_at_10
value: 94.063
- type: recall_at_100
value: 96.993
- type: recall_at_1000
value: 98.414
- type: recall_at_3
value: 90.228
- type: recall_at_5
value: 92.328
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.319
- type: map_at_10
value: 36.943
- type: map_at_100
value: 38.951
- type: map_at_1000
value: 39.114
- type: map_at_3
value: 32.82
- type: map_at_5
value: 34.945
- type: mrr_at_1
value: 44.135999999999996
- type: mrr_at_10
value: 53.071999999999996
- type: mrr_at_100
value: 53.87
- type: mrr_at_1000
value: 53.90200000000001
- type: mrr_at_3
value: 50.77199999999999
- type: mrr_at_5
value: 52.129999999999995
- type: ndcg_at_1
value: 44.135999999999996
- type: ndcg_at_10
value: 44.836
- type: ndcg_at_100
value: 51.754
- type: ndcg_at_1000
value: 54.36
- type: ndcg_at_3
value: 41.658
- type: ndcg_at_5
value: 42.354
- type: precision_at_1
value: 44.135999999999996
- type: precision_at_10
value: 12.284
- type: precision_at_100
value: 1.952
- type: precision_at_1000
value: 0.242
- type: precision_at_3
value: 27.828999999999997
- type: precision_at_5
value: 20.093
- type: recall_at_1
value: 22.319
- type: recall_at_10
value: 51.528
- type: recall_at_100
value: 76.70700000000001
- type: recall_at_1000
value: 92.143
- type: recall_at_3
value: 38.641
- type: recall_at_5
value: 43.653999999999996
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.182
- type: map_at_10
value: 65.146
- type: map_at_100
value: 66.023
- type: map_at_1000
value: 66.078
- type: map_at_3
value: 61.617999999999995
- type: map_at_5
value: 63.82299999999999
- type: mrr_at_1
value: 80.365
- type: mrr_at_10
value: 85.79
- type: mrr_at_100
value: 85.963
- type: mrr_at_1000
value: 85.968
- type: mrr_at_3
value: 84.952
- type: mrr_at_5
value: 85.503
- type: ndcg_at_1
value: 80.365
- type: ndcg_at_10
value: 73.13499999999999
- type: ndcg_at_100
value: 76.133
- type: ndcg_at_1000
value: 77.151
- type: ndcg_at_3
value: 68.255
- type: ndcg_at_5
value: 70.978
- type: precision_at_1
value: 80.365
- type: precision_at_10
value: 15.359
- type: precision_at_100
value: 1.7690000000000001
- type: precision_at_1000
value: 0.19
- type: precision_at_3
value: 44.024
- type: precision_at_5
value: 28.555999999999997
- type: recall_at_1
value: 40.182
- type: recall_at_10
value: 76.793
- type: recall_at_100
value: 88.474
- type: recall_at_1000
value: 95.159
- type: recall_at_3
value: 66.036
- type: recall_at_5
value: 71.391
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 92.7796
- type: ap
value: 89.24883716810874
- type: f1
value: 92.7706903433313
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 22.016
- type: map_at_10
value: 34.408
- type: map_at_100
value: 35.592
- type: map_at_1000
value: 35.64
- type: map_at_3
value: 30.459999999999997
- type: map_at_5
value: 32.721000000000004
- type: mrr_at_1
value: 22.593
- type: mrr_at_10
value: 34.993
- type: mrr_at_100
value: 36.113
- type: mrr_at_1000
value: 36.156
- type: mrr_at_3
value: 31.101
- type: mrr_at_5
value: 33.364
- type: ndcg_at_1
value: 22.579
- type: ndcg_at_10
value: 41.404999999999994
- type: ndcg_at_100
value: 47.018
- type: ndcg_at_1000
value: 48.211999999999996
- type: ndcg_at_3
value: 33.389
- type: ndcg_at_5
value: 37.425000000000004
- type: precision_at_1
value: 22.579
- type: precision_at_10
value: 6.59
- type: precision_at_100
value: 0.938
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.241000000000001
- type: precision_at_5
value: 10.59
- type: recall_at_1
value: 22.016
- type: recall_at_10
value: 62.927
- type: recall_at_100
value: 88.72
- type: recall_at_1000
value: 97.80799999999999
- type: recall_at_3
value: 41.229
- type: recall_at_5
value: 50.88
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 94.01732786137711
- type: f1
value: 93.76353126402202
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 76.91746466028272
- type: f1
value: 57.715651682646765
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 76.5030262273033
- type: f1
value: 74.6693629986121
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 79.74781439139207
- type: f1
value: 79.96684171018774
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 33.2156206892017
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 31.180539484816137
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 32.51125957874274
- type: mrr
value: 33.777037359249995
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 7.248
- type: map_at_10
value: 15.340000000000002
- type: map_at_100
value: 19.591
- type: map_at_1000
value: 21.187
- type: map_at_3
value: 11.329
- type: map_at_5
value: 13.209999999999999
- type: mrr_at_1
value: 47.678
- type: mrr_at_10
value: 57.493
- type: mrr_at_100
value: 58.038999999999994
- type: mrr_at_1000
value: 58.07
- type: mrr_at_3
value: 55.36600000000001
- type: mrr_at_5
value: 56.635999999999996
- type: ndcg_at_1
value: 46.129999999999995
- type: ndcg_at_10
value: 38.653999999999996
- type: ndcg_at_100
value: 36.288
- type: ndcg_at_1000
value: 44.765
- type: ndcg_at_3
value: 43.553
- type: ndcg_at_5
value: 41.317
- type: precision_at_1
value: 47.368
- type: precision_at_10
value: 28.669
- type: precision_at_100
value: 9.158
- type: precision_at_1000
value: 2.207
- type: precision_at_3
value: 40.97
- type: precision_at_5
value: 35.604
- type: recall_at_1
value: 7.248
- type: recall_at_10
value: 19.46
- type: recall_at_100
value: 37.214000000000006
- type: recall_at_1000
value: 67.64099999999999
- type: recall_at_3
value: 12.025
- type: recall_at_5
value: 15.443999999999999
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.595000000000002
- type: map_at_10
value: 47.815999999999995
- type: map_at_100
value: 48.811
- type: map_at_1000
value: 48.835
- type: map_at_3
value: 43.225
- type: map_at_5
value: 46.017
- type: mrr_at_1
value: 35.689
- type: mrr_at_10
value: 50.341
- type: mrr_at_100
value: 51.044999999999995
- type: mrr_at_1000
value: 51.062
- type: mrr_at_3
value: 46.553
- type: mrr_at_5
value: 48.918
- type: ndcg_at_1
value: 35.66
- type: ndcg_at_10
value: 55.859
- type: ndcg_at_100
value: 59.864
- type: ndcg_at_1000
value: 60.419999999999995
- type: ndcg_at_3
value: 47.371
- type: ndcg_at_5
value: 51.995000000000005
- type: precision_at_1
value: 35.66
- type: precision_at_10
value: 9.27
- type: precision_at_100
value: 1.1520000000000001
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 21.63
- type: precision_at_5
value: 15.655
- type: recall_at_1
value: 31.595000000000002
- type: recall_at_10
value: 77.704
- type: recall_at_100
value: 94.774
- type: recall_at_1000
value: 98.919
- type: recall_at_3
value: 56.052
- type: recall_at_5
value: 66.623
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 71.489
- type: map_at_10
value: 85.411
- type: map_at_100
value: 86.048
- type: map_at_1000
value: 86.064
- type: map_at_3
value: 82.587
- type: map_at_5
value: 84.339
- type: mrr_at_1
value: 82.28
- type: mrr_at_10
value: 88.27199999999999
- type: mrr_at_100
value: 88.362
- type: mrr_at_1000
value: 88.362
- type: mrr_at_3
value: 87.372
- type: mrr_at_5
value: 87.995
- type: ndcg_at_1
value: 82.27
- type: ndcg_at_10
value: 89.023
- type: ndcg_at_100
value: 90.191
- type: ndcg_at_1000
value: 90.266
- type: ndcg_at_3
value: 86.37
- type: ndcg_at_5
value: 87.804
- type: precision_at_1
value: 82.27
- type: precision_at_10
value: 13.469000000000001
- type: precision_at_100
value: 1.533
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.797
- type: precision_at_5
value: 24.734
- type: recall_at_1
value: 71.489
- type: recall_at_10
value: 95.824
- type: recall_at_100
value: 99.70599999999999
- type: recall_at_1000
value: 99.979
- type: recall_at_3
value: 88.099
- type: recall_at_5
value: 92.285
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 60.52398807444541
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 65.34855891507871
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.188000000000001
- type: map_at_10
value: 13.987
- type: map_at_100
value: 16.438
- type: map_at_1000
value: 16.829
- type: map_at_3
value: 9.767000000000001
- type: map_at_5
value: 11.912
- type: mrr_at_1
value: 25.6
- type: mrr_at_10
value: 37.744
- type: mrr_at_100
value: 38.847
- type: mrr_at_1000
value: 38.894
- type: mrr_at_3
value: 34.166999999999994
- type: mrr_at_5
value: 36.207
- type: ndcg_at_1
value: 25.6
- type: ndcg_at_10
value: 22.980999999999998
- type: ndcg_at_100
value: 32.039
- type: ndcg_at_1000
value: 38.157000000000004
- type: ndcg_at_3
value: 21.567
- type: ndcg_at_5
value: 19.070999999999998
- type: precision_at_1
value: 25.6
- type: precision_at_10
value: 12.02
- type: precision_at_100
value: 2.5100000000000002
- type: precision_at_1000
value: 0.396
- type: precision_at_3
value: 20.333000000000002
- type: precision_at_5
value: 16.98
- type: recall_at_1
value: 5.188000000000001
- type: recall_at_10
value: 24.372
- type: recall_at_100
value: 50.934999999999995
- type: recall_at_1000
value: 80.477
- type: recall_at_3
value: 12.363
- type: recall_at_5
value: 17.203
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 87.24286275535398
- type: cos_sim_spearman
value: 82.62333770991818
- type: euclidean_pearson
value: 84.60353717637284
- type: euclidean_spearman
value: 82.32990108810047
- type: manhattan_pearson
value: 84.6089049738196
- type: manhattan_spearman
value: 82.33361785438936
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 87.87428858503165
- type: cos_sim_spearman
value: 79.09145886519929
- type: euclidean_pearson
value: 86.42669231664036
- type: euclidean_spearman
value: 80.03127375435449
- type: manhattan_pearson
value: 86.41330338305022
- type: manhattan_spearman
value: 80.02492538673368
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 88.67912277322645
- type: cos_sim_spearman
value: 89.6171319711762
- type: euclidean_pearson
value: 86.56571917398725
- type: euclidean_spearman
value: 87.71216907898948
- type: manhattan_pearson
value: 86.57459050182473
- type: manhattan_spearman
value: 87.71916648349993
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 86.71957379085862
- type: cos_sim_spearman
value: 85.01784075851465
- type: euclidean_pearson
value: 84.7407848472801
- type: euclidean_spearman
value: 84.61063091345538
- type: manhattan_pearson
value: 84.71494352494403
- type: manhattan_spearman
value: 84.58772077604254
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 88.40508326325175
- type: cos_sim_spearman
value: 89.50912897763186
- type: euclidean_pearson
value: 87.82349070086627
- type: euclidean_spearman
value: 88.44179162727521
- type: manhattan_pearson
value: 87.80181927025595
- type: manhattan_spearman
value: 88.43205129636243
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 85.35846741715478
- type: cos_sim_spearman
value: 86.61172476741842
- type: euclidean_pearson
value: 84.60123125491637
- type: euclidean_spearman
value: 85.3001948141827
- type: manhattan_pearson
value: 84.56231142658329
- type: manhattan_spearman
value: 85.23579900798813
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 88.94539129818824
- type: cos_sim_spearman
value: 88.99349064256742
- type: euclidean_pearson
value: 88.7142444640351
- type: euclidean_spearman
value: 88.34120813505011
- type: manhattan_pearson
value: 88.70363008238084
- type: manhattan_spearman
value: 88.31952816956954
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 68.29910260369893
- type: cos_sim_spearman
value: 68.79263346213466
- type: euclidean_pearson
value: 68.41627521422252
- type: euclidean_spearman
value: 66.61602587398579
- type: manhattan_pearson
value: 68.49402183447361
- type: manhattan_spearman
value: 66.80157792354453
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 87.43703906343708
- type: cos_sim_spearman
value: 89.06081805093662
- type: euclidean_pearson
value: 87.48311456299662
- type: euclidean_spearman
value: 88.07417597580013
- type: manhattan_pearson
value: 87.48202249768894
- type: manhattan_spearman
value: 88.04758031111642
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 87.49080620485203
- type: mrr
value: 96.19145378949301
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 59.317
- type: map_at_10
value: 69.296
- type: map_at_100
value: 69.738
- type: map_at_1000
value: 69.759
- type: map_at_3
value: 66.12599999999999
- type: map_at_5
value: 67.532
- type: mrr_at_1
value: 62
- type: mrr_at_10
value: 70.176
- type: mrr_at_100
value: 70.565
- type: mrr_at_1000
value: 70.583
- type: mrr_at_3
value: 67.833
- type: mrr_at_5
value: 68.93299999999999
- type: ndcg_at_1
value: 62
- type: ndcg_at_10
value: 74.069
- type: ndcg_at_100
value: 76.037
- type: ndcg_at_1000
value: 76.467
- type: ndcg_at_3
value: 68.628
- type: ndcg_at_5
value: 70.57600000000001
- type: precision_at_1
value: 62
- type: precision_at_10
value: 10
- type: precision_at_100
value: 1.097
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 26.667
- type: precision_at_5
value: 17.4
- type: recall_at_1
value: 59.317
- type: recall_at_10
value: 87.822
- type: recall_at_100
value: 96.833
- type: recall_at_1000
value: 100
- type: recall_at_3
value: 73.06099999999999
- type: recall_at_5
value: 77.928
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.88910891089108
- type: cos_sim_ap
value: 97.236958456951
- type: cos_sim_f1
value: 94.39999999999999
- type: cos_sim_precision
value: 94.39999999999999
- type: cos_sim_recall
value: 94.39999999999999
- type: dot_accuracy
value: 99.82574257425742
- type: dot_ap
value: 94.94344759441888
- type: dot_f1
value: 91.17352056168507
- type: dot_precision
value: 91.44869215291752
- type: dot_recall
value: 90.9
- type: euclidean_accuracy
value: 99.88415841584158
- type: euclidean_ap
value: 97.2044250782305
- type: euclidean_f1
value: 94.210786739238
- type: euclidean_precision
value: 93.24191968658178
- type: euclidean_recall
value: 95.19999999999999
- type: manhattan_accuracy
value: 99.88613861386139
- type: manhattan_ap
value: 97.20683205497689
- type: manhattan_f1
value: 94.2643391521197
- type: manhattan_precision
value: 94.02985074626866
- type: manhattan_recall
value: 94.5
- type: max_accuracy
value: 99.88910891089108
- type: max_ap
value: 97.236958456951
- type: max_f1
value: 94.39999999999999
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 66.53940781726187
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 36.71865011295108
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 55.3218674533331
- type: mrr
value: 56.28279910449028
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.723915667479673
- type: cos_sim_spearman
value: 32.029070449745234
- type: dot_pearson
value: 28.864944212481454
- type: dot_spearman
value: 27.939266999596725
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.231
- type: map_at_10
value: 1.949
- type: map_at_100
value: 10.023
- type: map_at_1000
value: 23.485
- type: map_at_3
value: 0.652
- type: map_at_5
value: 1.054
- type: mrr_at_1
value: 86
- type: mrr_at_10
value: 92.067
- type: mrr_at_100
value: 92.067
- type: mrr_at_1000
value: 92.067
- type: mrr_at_3
value: 91.667
- type: mrr_at_5
value: 92.067
- type: ndcg_at_1
value: 83
- type: ndcg_at_10
value: 76.32900000000001
- type: ndcg_at_100
value: 54.662
- type: ndcg_at_1000
value: 48.062
- type: ndcg_at_3
value: 81.827
- type: ndcg_at_5
value: 80.664
- type: precision_at_1
value: 86
- type: precision_at_10
value: 80
- type: precision_at_100
value: 55.48
- type: precision_at_1000
value: 20.938000000000002
- type: precision_at_3
value: 85.333
- type: precision_at_5
value: 84.39999999999999
- type: recall_at_1
value: 0.231
- type: recall_at_10
value: 2.158
- type: recall_at_100
value: 13.344000000000001
- type: recall_at_1000
value: 44.31
- type: recall_at_3
value: 0.6779999999999999
- type: recall_at_5
value: 1.13
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.524
- type: map_at_10
value: 10.183
- type: map_at_100
value: 16.625
- type: map_at_1000
value: 18.017
- type: map_at_3
value: 5.169
- type: map_at_5
value: 6.772
- type: mrr_at_1
value: 32.653
- type: mrr_at_10
value: 47.128
- type: mrr_at_100
value: 48.458
- type: mrr_at_1000
value: 48.473
- type: mrr_at_3
value: 44.897999999999996
- type: mrr_at_5
value: 45.306000000000004
- type: ndcg_at_1
value: 30.612000000000002
- type: ndcg_at_10
value: 24.928
- type: ndcg_at_100
value: 37.613
- type: ndcg_at_1000
value: 48.528
- type: ndcg_at_3
value: 28.829
- type: ndcg_at_5
value: 25.237
- type: precision_at_1
value: 32.653
- type: precision_at_10
value: 22.448999999999998
- type: precision_at_100
value: 8.02
- type: precision_at_1000
value: 1.537
- type: precision_at_3
value: 30.612000000000002
- type: precision_at_5
value: 24.490000000000002
- type: recall_at_1
value: 2.524
- type: recall_at_10
value: 16.38
- type: recall_at_100
value: 49.529
- type: recall_at_1000
value: 83.598
- type: recall_at_3
value: 6.411
- type: recall_at_5
value: 8.932
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.09020000000001
- type: ap
value: 14.451710060978993
- type: f1
value: 54.7874410609049
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 59.745331069609506
- type: f1
value: 60.08387848592697
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 51.71549485462037
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 87.39345532574357
- type: cos_sim_ap
value: 78.16796549696478
- type: cos_sim_f1
value: 71.27713276123171
- type: cos_sim_precision
value: 68.3115626511853
- type: cos_sim_recall
value: 74.51187335092348
- type: dot_accuracy
value: 85.12248912201228
- type: dot_ap
value: 69.26039256107077
- type: dot_f1
value: 65.04294321240867
- type: dot_precision
value: 63.251059586138126
- type: dot_recall
value: 66.93931398416886
- type: euclidean_accuracy
value: 87.07754664123503
- type: euclidean_ap
value: 77.7872176038945
- type: euclidean_f1
value: 70.85587801278899
- type: euclidean_precision
value: 66.3519115614924
- type: euclidean_recall
value: 76.01583113456465
- type: manhattan_accuracy
value: 87.07754664123503
- type: manhattan_ap
value: 77.7341400185556
- type: manhattan_f1
value: 70.80310880829015
- type: manhattan_precision
value: 69.54198473282443
- type: manhattan_recall
value: 72.1108179419525
- type: max_accuracy
value: 87.39345532574357
- type: max_ap
value: 78.16796549696478
- type: max_f1
value: 71.27713276123171
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.09457833663213
- type: cos_sim_ap
value: 86.33024314706873
- type: cos_sim_f1
value: 78.59623733719248
- type: cos_sim_precision
value: 74.13322413322413
- type: cos_sim_recall
value: 83.63104404065291
- type: dot_accuracy
value: 88.3086894089339
- type: dot_ap
value: 83.92225241805097
- type: dot_f1
value: 76.8721826377781
- type: dot_precision
value: 72.8168044077135
- type: dot_recall
value: 81.40591315060055
- type: euclidean_accuracy
value: 88.77052043311213
- type: euclidean_ap
value: 85.7410710218755
- type: euclidean_f1
value: 77.97705489398781
- type: euclidean_precision
value: 73.77713657598241
- type: euclidean_recall
value: 82.68401601478288
- type: manhattan_accuracy
value: 88.73753250281368
- type: manhattan_ap
value: 85.72867199072802
- type: manhattan_f1
value: 77.89774182922812
- type: manhattan_precision
value: 74.23787931635857
- type: manhattan_recall
value: 81.93717277486911
- type: max_accuracy
value: 89.09457833663213
- type: max_ap
value: 86.33024314706873
- type: max_f1
value: 78.59623733719248
---
# agier9/UAE-Large-V1-Q5_K_S-GGUF
This model was converted to GGUF format from [`WhereIsAI/UAE-Large-V1`](https://huggingface.co/WhereIsAI/UAE-Large-V1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/WhereIsAI/UAE-Large-V1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo agier9/UAE-Large-V1-Q5_K_S-GGUF --model uae-large-v1-q5_k_s.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo agier9/UAE-Large-V1-Q5_K_S-GGUF --model uae-large-v1-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo:
```bash
git clone https://github.com/ggerganov/llama.cpp && \
cd llama.cpp && \
make && \
./main -m uae-large-v1-q5_k_s.gguf -n 128
```
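The base model is a sentence-embedding model rather than a chat model, so the text-generation prompt above is mainly a smoke test. If you prefer Python over the CLI, the separate llama-cpp-python bindings can load the same GGUF file and expose embeddings directly; the following is a minimal sketch under that assumption (the package is installed with `pip install llama-cpp-python` and the quantized file has already been downloaded locally):
```python
# Sketch: embedding with the llama-cpp-python bindings rather than the llama.cpp CLI.
# The model path assumes the GGUF file referenced above sits in the working directory.
from llama_cpp import Llama

llm = Llama(model_path="uae-large-v1-q5_k_s.gguf", embedding=True)

vector = llm.embed("Represent this sentence: the meaning of life")
print(len(vector))  # embedding dimensionality (1024 for UAE-Large-V1)
```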
|
DiederikMartens/eBERT_sa_cv_11_fold9 | DiederikMartens | 2024-05-28T05:30:21Z | 109 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-05-28T05:22:48Z | ---
license: apache-2.0
base_model: google-bert/bert-base-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: eBERT_sa_cv_11_fold9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eBERT_sa_cv_11_fold9
This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5201
- F1: 0.5980
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a minimal `TrainingArguments` sketch follows the list):
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
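This is a reconstruction, not the original training script; dataset loading, tokenization, and the metric function are omitted because the card does not document them.
```python
# Hedged reconstruction of the listed hyperparameters with the standard Trainer API.
# The dataset, label count, and compute_metrics function are not documented on this
# card, so they are left as placeholders in the commented-out Trainer call.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base = "google-bert/bert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base)

args = TrainingArguments(
    output_dir="eBERT_sa_cv_11_fold9",
    learning_rate=4.47e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    seed=42,
    num_train_epochs=3,
    lr_scheduler_type="linear",  # Adam betas=(0.9, 0.999) and eps=1e-08 are the defaults
)

# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=..., tokenizer=tokenizer)
# trainer.train()
```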
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.5013 | 0.4869 |
| No log | 2.0 | 452 | 0.4300 | 0.5114 |
| 0.4923 | 3.0 | 678 | 0.5201 | 0.5980 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
DiederikMartens/tsBERT_sa_cv_11_fold9 | DiederikMartens | 2024-05-28T05:28:48Z | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:igorsterner/german-english-code-switching-bert",
"base_model:finetune:igorsterner/german-english-code-switching-bert",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-05-28T05:17:54Z | ---
license: mit
base_model: igorsterner/german-english-code-switching-bert
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: tsBERT_sa_cv_11_fold9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tsBERT_sa_cv_11_fold9
This model is a fine-tuned version of [igorsterner/german-english-code-switching-bert](https://huggingface.co/igorsterner/german-english-code-switching-bert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3356
- F1: 0.7731
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.3141 | 0.5562 |
| No log | 2.0 | 452 | 0.2876 | 0.7223 |
| 0.346 | 3.0 | 678 | 0.3356 | 0.7731 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
canho/koalpaca-5.8b-emojis-5epochs-final | canho | 2024-05-28T05:27:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-28T05:27:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sidvash/famus_exh_task2_unsloth_llama-3-8b-Instruct-bnb-4bit | sidvash | 2024-05-28T05:24:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-28T05:24:11Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** sidvash
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
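A minimal sketch of loading this checkpoint back through Unsloth for inference is shown below; the `max_seq_length` value and the example prompt are illustrative assumptions, not something documented on this card.
```python
# Sketch: loading the fine-tuned checkpoint with Unsloth for inference.
# max_seq_length is an assumed value; adjust it to your use case.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="sidvash/famus_exh_task2_unsloth_llama-3-8b-Instruct-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

messages = [{"role": "user", "content": "Hello, what can you do?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```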
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
DiederikMartens/gBERT_sa_cv_11_fold9 | DiederikMartens | 2024-05-28T05:21:25Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-german-cased",
"base_model:finetune:google-bert/bert-base-german-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-05-28T05:08:43Z | ---
license: mit
base_model: google-bert/bert-base-german-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: gBERT_sa_cv_11_fold9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gBERT_sa_cv_11_fold9
This model is a fine-tuned version of [google-bert/bert-base-german-cased](https://huggingface.co/google-bert/bert-base-german-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4346
- F1: 0.7063
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.3577 | 0.6578 |
| No log | 2.0 | 452 | 0.3322 | 0.6631 |
| 0.341 | 3.0 | 678 | 0.4346 | 0.7063 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
DownwardSpiral33/gpt2-imdb-pos-roberta16-256_0_05-full-2024.05.28.04.18 | DownwardSpiral33 | 2024-05-28T05:20:37Z | 131 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-05-28T05:20:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DiederikMartens/tsBERT_sa_cv_11_fold8 | DiederikMartens | 2024-05-28T05:17:48Z | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:igorsterner/german-english-code-switching-bert",
"base_model:finetune:igorsterner/german-english-code-switching-bert",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-05-28T05:04:27Z | ---
license: mit
base_model: igorsterner/german-english-code-switching-bert
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: tsBERT_sa_cv_11_fold8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tsBERT_sa_cv_11_fold8
This model is a fine-tuned version of [igorsterner/german-english-code-switching-bert](https://huggingface.co/igorsterner/german-english-code-switching-bert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4735
- F1: 0.6727
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
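As a rough reference, these values map onto a standard 🤗 `TrainingArguments` configuration as sketched below; this is an illustrative reconstruction (the `output_dir` and any settings not listed above are assumptions, and data/model loading is omitted because the card does not document it):

```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters listed above; not the original training script.
training_args = TrainingArguments(
    output_dir="tsBERT_sa_cv_11_fold8",   # assumption: any local path works
    learning_rate=4.47e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,                   # the Adam betas/epsilon above are the defaults
)
```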
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.3932 | 0.5292 |
| No log | 2.0 | 452 | 0.4206 | 0.6498 |
| 0.3211 | 3.0 | 678 | 0.4735 | 0.6727 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
enochs/diamond_predictions | enochs | 2024-05-28T05:14:38Z | 0 | 1 | null | [
"joblib",
"dataset:enochs/rvm",
"arxiv:1910.09700",
"license:mit",
"region:us"
]
| null | 2024-05-28T03:40:41Z | ---
license: mit
datasets:
- enochs/rvm
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
opengait/OpenGait | opengait | 2024-05-28T05:14:28Z | 0 | 3 | null | [
"arxiv:2211.06597",
"region:us"
]
| null | 2023-10-12T07:41:06Z | # OpenGait Checkpoint Repo
GitHub link:
https://github.com/ShiqiYu/OpenGait
Paper link:
https://arxiv.org/abs/2211.06597 |
frederickwang99/soayllama_v2_7b | frederickwang99 | 2024-05-28T05:14:12Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"zh",
"en",
"dataset:frederickwang99/SoAyBench",
"arxiv:2405.15165",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-05-27T08:35:43Z | ---
license: mit
datasets:
- frederickwang99/SoAyBench
language:
- zh
- en
---
# SoAyLLaMA-7B
[Paper](https://arxiv.org/pdf/2405.15165)
[Code](https://github.com/WangYC-99/SoAyLLaMA_Training?tab=readme-ov-file)
[System](https://soay.aminer.cn/)
## Overview
SOAYLLaMA is designed for supervised fine-tuning of open-source models when training resources are available. During the training stage, SOAYLLaMA takes an API description and a query as input and combines the solution and program as the response. During the inference stage, SOAYLLaMA completes the Planning and Formatting processes concurrently to derive the solution and code. Similar to SOAYGPT, it then uses the environment to execute the program and obtain the answer based on K.

We conduct the fine-tuning on 3 different variants of the LLaMA model: Chat-7B (Llama-2-7b-chat-hf), Code-7B (CodeLlama-7b-Instruct-hf), and Code-13B (CodeLlama-13b-Instruct-hf).
Here we release Code-7B as SoAyLLaMA-7B.
## Try SoAyLLaMA On!
You can find the detailed usage in the [GitHub repo](https://github.com/WangYC-99/SoAyLLaMA_Training?tab=readme-ov-file).
The model is trained on inputs in the specific format `"[INST]" + bg_text + prompt + "\n-----\n[/INST]"`.
If you want to download the model from Hugging Face and use it to generate AMiner API-calling code, you can use the function below:
```python
from vllm import LLM, SamplingParams  # the generation call below relies on the vLLM API
import time

def code_gen_llm(prompts, llm):
bg_text = "Background:\n\nHere are some tool functions you can use. Each function returns a dict or a list of dict.\n------\nsearchPerson(name, organization, interest):\n person_list = [{'person_id': str, 'name': str, 'num_citation': int, 'interests':list of str, 'num_pubs': int, 'organization': str}, {...}]\n return person_list\n---\nsearchPublication(publication_info)\n publication_list = [{'pub_id': str, 'title': str, 'year': time}, {...}]\n return publication_list\n---\ngetCoauthors(person_id):\n coauthors_list = [{'person_id': str, 'name': str, 'relation': ['advisor' or 'advisee' or 'coauthor']}, {...}]\n return coauthors_list\n---\ngetPersonInterest(person_id):\n return interest_list\n---\ngetPersonPubs(person_id):\n publication_list = [{'authors_name_list':list of str, 'pub_id':str, 'title':str, 'num_citation':int, 'year':str}]\n return publication_list\n---\ngetPersonBasicInfo(person_id):\n person_basic_info = {'person_id': str, 'name': str, 'gender': str, 'organization':str , 'position': str, 'bio': str, 'education_experience': str, 'email': str}\n return person_basic_info\n---\ngetPublication(pub_id)\n publication_info = {'abstract' : str, 'author_list': [{'person_id': str, 'name': str, 'organization': str}, {...}], 'num_citation': int, 'year' : int, 'pdf_link': str, 'venue' : str}\n return publication_info\n------\nYou are given a query. Parse the query into a combination of the given query and write python codes in order to solve it. \nNote that the result must be one of these combination candidates:\nsearhPerson\nsearchPublication\nsearchPerson -> getCoauthors\nsearchPerson -> getPublication\nsearchPerson -> getPersonBasicInfo\nsearchPublication -> getPublication\nsearchPerson -> getCoauthors -> searchPerson\nsearchPerson -> getCoauthors -> getCoauthors\nsearchPerson -> getCoauthors -> getPersonInterest\nsearchPerson -> getPersonPubs -> getPublication\nsearchPublication -> getPublication -> getPersonInterest\nsearchPublication -> getPublication -> getCoauthors\nsearchPublication -> getPublication -> getPersonPubs\nsearchPublication -> getPublication -> getPersonBasicInfo\nsearchPublication -> getPublication -> searchPerson\n\n-----\nQuery:\n"
new_prompts = []
for prompt in prompts:
new_prompt = "[INST]" + bg_text + prompt + "\n-----\n[/INST]"
new_prompts.append(new_prompt)
sampling_params = SamplingParams(temperature=0, top_p=0.95, max_tokens=512)
bg_time = time.time()
outputs = llm.generate(new_prompts, sampling_params)
generated_txt_list = []
for each in outputs:
generated_txt_list.append(each.outputs[0].text)
    end_time = time.time()
    print(f"generation took {end_time - bg_time:.2f} seconds")
    return generated_txt_list
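
# --- Minimal usage sketch (added for illustration, not from the original card) ---
# The model id and example query below are assumptions; adjust them to your setup.
if __name__ == "__main__":
    llm = LLM(model="frederickwang99/soayllama_v2_7b")
    answers = code_gen_llm(["List recent publications of Jie Tang from Tsinghua University."], llm)
    print(answers[0])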
``` |
DiederikMartens/eBERT_sa_cv_11_fold7 | DiederikMartens | 2024-05-28T05:09:06Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-05-28T04:54:59Z | ---
license: apache-2.0
base_model: google-bert/bert-base-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: eBERT_sa_cv_11_fold7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eBERT_sa_cv_11_fold7
This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5195
- F1: 0.5523
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.5056 | 0.4396 |
| No log | 2.0 | 452 | 0.5157 | 0.4881 |
| 0.4883 | 3.0 | 678 | 0.5195 | 0.5523 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
DiederikMartens/gBERT_sa_cv_11_fold8 | DiederikMartens | 2024-05-28T05:08:38Z | 109 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-german-cased",
"base_model:finetune:google-bert/bert-base-german-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-05-28T04:56:16Z | ---
license: mit
base_model: google-bert/bert-base-german-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: gBERT_sa_cv_11_fold8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gBERT_sa_cv_11_fold8
This model is a fine-tuned version of [google-bert/bert-base-german-cased](https://huggingface.co/google-bert/bert-base-german-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5125
- F1: 0.6646
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.3892 | 0.5595 |
| No log | 2.0 | 452 | 0.4334 | 0.6250 |
| 0.3272 | 3.0 | 678 | 0.5125 | 0.6646 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
AdnanRiaz107/CodePhi-3-mini-4k-instruct-pythonAPPS | AdnanRiaz107 | 2024-05-28T05:07:59Z | 4 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"generated_from_trainer",
"custom_code",
"base_model:AdnanRiaz107/CodePhi-3-mini-4k-instruct-python",
"base_model:adapter:AdnanRiaz107/CodePhi-3-mini-4k-instruct-python",
"license:mit",
"region:us"
]
| null | 2024-05-27T23:01:49Z | ---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: AdnanRiaz107/CodePhi-3-mini-4k-instruct-python
model-index:
- name: CodePhi-3-mini-4k-instruct-pythonAPPS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CodePhi-3-mini-4k-instruct-pythonAPPS
This model is a fine-tuned version of [AdnanRiaz107/CodePhi-3-mini-4k-instruct-python](https://huggingface.co/AdnanRiaz107/CodePhi-3-mini-4k-instruct-python) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6522
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5844 | 0.0833 | 100 | 0.6866 |
| 0.6441 | 0.1667 | 200 | 0.6737 |
| 0.6551 | 0.25 | 300 | 0.6658 |
| 0.5858 | 0.3333 | 400 | 0.6605 |
| 0.6136 | 0.4167 | 500 | 0.6569 |
| 0.5982 | 0.5 | 600 | 0.6546 |
| 0.6 | 0.5833 | 700 | 0.6531 |
| 0.5609 | 0.6667 | 800 | 0.6525 |
| 0.5824 | 0.75 | 900 | 0.6523 |
| 0.538 | 0.8333 | 1000 | 0.6523 |
| 0.6339 | 0.9167 | 1100 | 0.6523 |
| 0.6138 | 1.0 | 1200 | 0.6522 |
### Framework versions
- PEFT 0.11.0
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
DiederikMartens/tsBERT_sa_cv_11_fold7 | DiederikMartens | 2024-05-28T05:04:22Z | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:igorsterner/german-english-code-switching-bert",
"base_model:finetune:igorsterner/german-english-code-switching-bert",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-05-28T04:50:56Z | ---
license: mit
base_model: igorsterner/german-english-code-switching-bert
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: tsBERT_sa_cv_11_fold7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tsBERT_sa_cv_11_fold7
This model is a fine-tuned version of [igorsterner/german-english-code-switching-bert](https://huggingface.co/igorsterner/german-english-code-switching-bert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3814
- F1: 0.7077
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.3252 | 0.5638 |
| No log | 2.0 | 452 | 0.3258 | 0.6246 |
| 0.3407 | 3.0 | 678 | 0.3814 | 0.7077 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
AlphaCausal/llama-3-8b-Instruct-bnb-4bit-AlphaCausal-demo1 | AlphaCausal | 2024-05-28T04:55:37Z | 2 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2024-05-28T04:53:25Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** AlphaCausal
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
DiederikMartens/eBERT_sa_cv_11_fold6 | DiederikMartens | 2024-05-28T04:54:55Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-05-28T04:40:36Z | ---
license: apache-2.0
base_model: google-bert/bert-base-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: eBERT_sa_cv_11_fold6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eBERT_sa_cv_11_fold6
This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4504
- F1: 0.5502
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.4674 | 0.4615 |
| No log | 2.0 | 452 | 0.4394 | 0.5035 |
| 0.49 | 3.0 | 678 | 0.4504 | 0.5502 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
athmikha/phi2-spelling | athmikha | 2024-05-28T04:52:49Z | 81 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2024-05-28T04:49:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ifyou819/summary-pumed-dataset-4 | ifyou819 | 2024-05-28T04:52:34Z | 111 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"base_model:ifyou819/summary-pumed-dataset-3",
"base_model:finetune:ifyou819/summary-pumed-dataset-3",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2024-05-28T04:50:57Z | ---
base_model: ifyou819/summary-pumed-dataset-3
tags:
- generated_from_trainer
model-index:
- name: summary-pumed-dataset-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# summary-pumed-dataset-4
This model is a fine-tuned version of [ifyou819/summary-pumed-dataset-3](https://huggingface.co/ifyou819/summary-pumed-dataset-3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4505
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.7499 | 1.0 | 1948 | 4.9432 |
| 5.3331 | 2.0 | 3896 | 4.5836 |
| 5.1375 | 3.0 | 5844 | 4.4728 |
| 5.0486 | 4.0 | 7792 | 4.4505 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.19.1
|
Chituyi7/llama3.1-8B-4Bit-InstructionTuned-OIG | Chituyi7 | 2024-05-28T04:50:23Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-05-28T00:55:54Z | ---
license: apache-2.0
---
|
DiederikMartens/mBERT_sa_cv_11_fold6 | DiederikMartens | 2024-05-28T04:50:15Z | 109 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-05-28T04:36:52Z | ---
license: apache-2.0
base_model: google-bert/bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: mBERT_sa_cv_11_fold6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mBERT_sa_cv_11_fold6
This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4409
- F1: 0.6517
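Although the card includes no usage code, a sentiment checkpoint like this one can typically be queried through the `text-classification` pipeline; the snippet below is only a sketch that assumes the repository id from this card (the label names depend on the undocumented training data):

```python
from transformers import pipeline

# Repository id taken from this card; the example sentence is an illustrative assumption.
classifier = pipeline("text-classification", model="DiederikMartens/mBERT_sa_cv_11_fold6")
print(classifier("Der Service war freundlich, aber das Essen kam viel zu spät."))
```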
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.4253 | 0.5233 |
| No log | 2.0 | 452 | 0.4337 | 0.5742 |
| 0.4401 | 3.0 | 678 | 0.4409 | 0.6517 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
khangmacon/cyllama3 | khangmacon | 2024-05-28T04:48:04Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"dataset:khangmacon/llmtrain",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:adapter:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"model-index",
"region:us"
]
| null | 2024-05-07T08:35:09Z | ---
license: llama3
library_name: peft
tags:
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B
datasets:
- khangmacon/llmtrain
metrics:
- accuracy
model-index:
- name: cyllama3
results:
- task:
type: text-generation
name: Causal Language Modeling
dataset:
name: khangmacon/llmtrain
type: khangmacon/llmtrain
metrics:
- type: accuracy
value: 0.5590444975644216
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cyllama3
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the khangmacon/llmtrain dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9930
- Accuracy: 0.5590
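Because this repository ships a PEFT adapter rather than full model weights, it is normally loaded on top of the base model; the snippet below is a minimal sketch (the repository ids come from this card, everything else is an assumption):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B"  # base model named in this card
adapter_id = "khangmacon/cyllama3"      # this repository (LoRA adapter weights)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```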
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 2.2432 | 0.01 | 500 | 2.1239 | 0.5358 |
| 2.209 | 0.02 | 1000 | 2.0922 | 0.5404 |
| 2.1988 | 0.03 | 1500 | 2.0742 | 0.5436 |
| 2.1877 | 0.04 | 2000 | 2.0615 | 0.5463 |
| 2.1743 | 0.05 | 2500 | 2.0514 | 0.5479 |
| 2.1885 | 0.06 | 3000 | 2.0427 | 0.5495 |
| 2.1883 | 0.07 | 3500 | 2.0355 | 0.5509 |
| 2.1954 | 0.08 | 4000 | 2.0298 | 0.5519 |
| 2.1597 | 0.09 | 4500 | 2.0254 | 0.5526 |
| 2.1763 | 0.1 | 5000 | 2.0222 | 0.5532 |
| 2.1413 | 0.11 | 5500 | 2.0195 | 0.5541 |
| 2.1812 | 0.12 | 6000 | 2.0169 | 0.5545 |
| 2.1526 | 0.14 | 6500 | 2.0148 | 0.5547 |
| 2.155 | 0.15 | 7000 | 2.0131 | 0.5554 |
| 2.1594 | 0.16 | 7500 | 2.0110 | 0.5558 |
| 2.1681 | 0.17 | 8000 | 2.0097 | 0.5559 |
| 2.1572 | 0.18 | 8500 | 2.0083 | 0.5562 |
| 2.0943 | 0.19 | 9000 | 2.0074 | 0.5566 |
| 2.1421 | 0.2 | 9500 | 2.0063 | 0.5566 |
| 2.1196 | 0.21 | 10000 | 2.0049 | 0.5568 |
| 2.1634 | 0.22 | 10500 | 2.0042 | 0.5568 |
| 2.1361 | 0.23 | 11000 | 2.0035 | 0.5573 |
| 2.1614 | 0.24 | 11500 | 2.0027 | 0.5572 |
| 2.1205 | 0.25 | 12000 | 2.0021 | 0.5576 |
| 2.0984 | 0.26 | 12500 | 2.0011 | 0.5576 |
| 2.1226 | 0.27 | 13000 | 2.0006 | 0.5575 |
| 2.1054 | 0.28 | 13500 | 2.0001 | 0.5577 |
| 2.1297 | 0.29 | 14000 | 1.9997 | 0.5578 |
| 2.1233 | 0.3 | 14500 | 1.9988 | 0.5581 |
| 2.1348 | 0.31 | 15000 | 1.9984 | 0.5581 |
| 2.1494 | 0.32 | 15500 | 1.9980 | 0.5582 |
| 2.0827 | 0.33 | 16000 | 1.9976 | 0.5584 |
| 2.0991 | 0.34 | 16500 | 1.9975 | 0.5582 |
| 2.1108 | 0.35 | 17000 | 1.9972 | 0.5582 |
| 2.1209 | 0.36 | 17500 | 1.9968 | 0.5583 |
| 2.1012 | 0.37 | 18000 | 1.9963 | 0.5584 |
| 2.1155 | 0.38 | 18500 | 1.9959 | 0.5585 |
| 2.1493 | 0.4 | 19000 | 1.9956 | 0.5585 |
| 2.1219 | 0.41 | 19500 | 1.9953 | 0.5587 |
| 2.1584 | 0.42 | 20000 | 1.9952 | 0.5588 |
| 2.1167 | 0.43 | 20500 | 1.9950 | 0.5587 |
| 2.1507 | 0.44 | 21000 | 1.9948 | 0.5586 |
| 2.1043 | 0.45 | 21500 | 1.9946 | 0.5587 |
| 2.0864 | 0.46 | 22000 | 1.9945 | 0.5587 |
| 2.1074 | 0.47 | 22500 | 1.9943 | 0.5587 |
| 2.0858 | 0.48 | 23000 | 1.9942 | 0.5590 |
| 2.1178 | 0.49 | 23500 | 1.9941 | 0.5588 |
| 2.1148 | 0.5 | 24000 | 1.9940 | 0.5588 |
| 2.1165 | 0.51 | 24500 | 1.9939 | 0.5588 |
| 2.1012 | 0.52 | 25000 | 1.9938 | 0.5590 |
| 2.1573 | 0.53 | 25500 | 1.9936 | 0.5590 |
| 2.1674 | 0.54 | 26000 | 1.9936 | 0.5589 |
| 2.1184 | 0.55 | 26500 | 1.9935 | 0.5590 |
| 2.1424 | 0.56 | 27000 | 1.9935 | 0.5590 |
| 2.1437 | 0.57 | 27500 | 1.9935 | 0.5590 |
| 2.1244 | 0.58 | 28000 | 1.9933 | 0.5591 |
| 2.0767 | 0.59 | 28500 | 1.9933 | 0.5589 |
| 2.1182 | 0.6 | 29000 | 1.9934 | 0.5591 |
| 2.1277 | 0.61 | 29500 | 1.9933 | 0.5591 |
| 2.1407 | 0.62 | 30000 | 1.9932 | 0.5591 |
| 2.1222 | 0.63 | 30500 | 1.9932 | 0.5591 |
| 2.1146 | 0.64 | 31000 | 1.9931 | 0.5591 |
| 2.1441 | 0.65 | 31500 | 1.9932 | 0.5591 |
| 2.1224 | 0.67 | 32000 | 1.9931 | 0.5590 |
| 2.0878 | 0.68 | 32500 | 1.9932 | 0.5591 |
| 2.1172 | 0.69 | 33000 | 1.9932 | 0.5590 |
| 2.1166 | 0.7 | 33500 | 1.9931 | 0.5592 |
| 2.1054 | 0.71 | 34000 | 1.9931 | 0.5591 |
| 2.0972 | 0.72 | 34500 | 1.9931 | 0.5590 |
| 2.1228 | 0.73 | 35000 | 1.9931 | 0.5590 |
| 2.1231 | 0.74 | 35500 | 1.9931 | 0.5592 |
| 2.0974 | 0.75 | 36000 | 1.9931 | 0.5590 |
| 2.1025 | 0.76 | 36500 | 1.9931 | 0.5591 |
| 2.1217 | 0.77 | 37000 | 1.9931 | 0.5590 |
| 2.1227 | 0.78 | 37500 | 1.9930 | 0.5591 |
| 2.1272 | 0.79 | 38000 | 1.9931 | 0.5592 |
| 2.117 | 0.8 | 38500 | 1.9931 | 0.5591 |
| 2.1325 | 0.81 | 39000 | 1.9931 | 0.5591 |
| 2.1046 | 0.82 | 39500 | 1.9930 | 0.5591 |
| 2.1096 | 0.83 | 40000 | 1.9930 | 0.5591 |
| 2.1149 | 0.84 | 40500 | 1.9931 | 0.5591 |
| 2.122 | 0.85 | 41000 | 1.9931 | 0.5591 |
| 2.1137 | 0.86 | 41500 | 1.9931 | 0.5591 |
| 2.0983 | 0.87 | 42000 | 1.9930 | 0.5590 |
| 2.1109 | 0.88 | 42500 | 1.9931 | 0.5591 |
| 2.172 | 0.89 | 43000 | 1.9930 | 0.5590 |
| 2.0882 | 0.9 | 43500 | 1.9930 | 0.5591 |
| 2.0646 | 0.91 | 44000 | 1.9930 | 0.5591 |
| 2.1223 | 0.93 | 44500 | 1.9930 | 0.5591 |
| 2.1342 | 0.94 | 45000 | 1.9930 | 0.5591 |
| 2.0991 | 0.95 | 45500 | 1.9930 | 0.5590 |
| 2.1431 | 0.96 | 46000 | 1.9930 | 0.5592 |
| 2.0965 | 0.97 | 46500 | 1.9931 | 0.5590 |
| 2.1377 | 0.98 | 47000 | 1.9931 | 0.5592 |
| 2.1118 | 0.99 | 47500 | 1.9931 | 0.5592 |
| 2.089 | 1.0 | 48000 | 1.9930 | 0.5590 |
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.39.3
- Pytorch 2.2.0
- Datasets 2.18.0
- Tokenizers 0.15.2 |
jojo-ai-mst/BurmeseBert | jojo-ai-mst | 2024-05-28T04:42:27Z | 128 | 1 | transformers | [
"transformers",
"safetensors",
"bert",
"fill-mask",
"my",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2024-05-28T04:24:13Z | ---
library_name: transformers
language:
- my
- en
---
# Burmese-Bert
Burmese-Bert is a bilingual masked language model based on "bert-large-uncased".
The architecture follows the bidirectional encoder representations from Transformers (BERT) design.
It supports the English and Burmese languages.
## Model Details
Coming Soon
### Model Description
- **Developed by:** Min Si Thu
- **Model type:** bidirectional encoder representations from transformers
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
- Mask Filling Language Model
- Burmese Natural Language Understanding
### How to use
```shell
# install the dependencies
pip install transformers
```
```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer
model_checkpoint = "jojo-ai-mst/BurmeseBert"
model = AutoModelForMaskedLM.from_pretrained(model_checkpoint)
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
text = "This is a great [MASK]."
inputs = tokenizer(text, return_tensors="pt")
token_logits = model(**inputs).logits
# Find the location of [MASK] and extract its logits
mask_token_index = torch.where(inputs["input_ids"] == tokenizer.mask_token_id)[1]
mask_token_logits = token_logits[0, mask_token_index, :]
# Pick the [MASK] candidates with the highest logits
top_5_tokens = torch.topk(mask_token_logits, 5, dim=1).indices[0].tolist()
for token in top_5_tokens:
print(f"'>>> {text.replace(tokenizer.mask_token, tokenizer.decode([token]))}'")
```
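The same checkpoint can also be queried through the `fill-mask` pipeline, which wraps the steps above; this shorter variant is provided only as a convenience sketch:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="jojo-ai-mst/BurmeseBert")
for prediction in fill_mask("This is a great [MASK]."):
    print(prediction["sequence"], round(prediction["score"], 3))
```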
## Citation [optional]
Coming Soon |
DiederikMartens/eBERT_sa_cv_11_fold5 | DiederikMartens | 2024-05-28T04:40:32Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-05-28T04:26:16Z | ---
license: apache-2.0
base_model: google-bert/bert-base-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: eBERT_sa_cv_11_fold5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eBERT_sa_cv_11_fold5
This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5776
- F1: 0.5185
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.5329 | 0.4095 |
| No log | 2.0 | 452 | 0.4614 | 0.5122 |
| 0.4859 | 3.0 | 678 | 0.5776 | 0.5185 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
DiederikMartens/tsBERT_sa_cv_11_fold5 | DiederikMartens | 2024-05-28T04:37:16Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:igorsterner/german-english-code-switching-bert",
"base_model:finetune:igorsterner/german-english-code-switching-bert",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-05-28T04:23:56Z | ---
license: mit
base_model: igorsterner/german-english-code-switching-bert
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: tsBERT_sa_cv_11_fold5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tsBERT_sa_cv_11_fold5
This model is a fine-tuned version of [igorsterner/german-english-code-switching-bert](https://huggingface.co/igorsterner/german-english-code-switching-bert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4779
- F1: 0.6529
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.4254 | 0.5185 |
| No log | 2.0 | 452 | 0.3887 | 0.6111 |
| 0.3349 | 3.0 | 678 | 0.4779 | 0.6529 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
yrju/ultra_llm_merged | yrju | 2024-05-28T04:35:41Z | 54 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"en",
"arxiv:2306.01708",
"base_model:WizardLMTeam/WizardMath-7B-V1.1",
"base_model:merge:WizardLMTeam/WizardMath-7B-V1.1",
"base_model:codellama/CodeLlama-7b-Instruct-hf",
"base_model:merge:codellama/CodeLlama-7b-Instruct-hf",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:merge:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-05-28T02:48:37Z | ---
license: apache-2.0
language:
- en
base_model:
- mistralai/Mistral-7B-v0.1
- WizardLM/WizardMath-7B-V1.1
- codellama/CodeLlama-7b-Instruct-hf
library_name: transformers
tags:
- mergekit
- merge
---
# ultra_llm_merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as a base.
### Models Merged
The following models were included in the merge:
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
* [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: mistralai/Mistral-7B-v0.1
dtype: float16
merge_method: ties
parameters:
int8_mask: 1.0
normalize: 1.0
slices:
- sources:
- layer_range: [0, 32]
model: mistralai/Mistral-7B-v0.1
- layer_range: [0, 32]
model: WizardLM/WizardMath-7B-V1.1
parameters:
density: 0.5
weight:
- filter: mlp
value: 0.5
- value: 0.0
- layer_range: [0, 32]
model: codellama/CodeLlama-7b-Instruct-hf
parameters:
density: 0.5
weight: 0.5
```
|
openbmb/RLHF-V | openbmb | 2024-05-28T04:31:21Z | 40 | 15 | transformers | [
"transformers",
"pytorch",
"beit3_llava",
"text-generation",
"en",
"dataset:Yirany/UniMM-Chat",
"dataset:HaoyeZhang/RLHF-V-Dataset",
"arxiv:2312.00849",
"arxiv:2405.17220",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-11-27T03:42:11Z | ---
license: apache-2.0
datasets:
- Yirany/UniMM-Chat
- HaoyeZhang/RLHF-V-Dataset
language:
- en
library_name: transformers
---
# Model Card for RLHF-V
[Project Page](https://rlhf-v.github.io/) | [GitHub ](https://github.com/RLHF-V/RLHF-V) | [Demo](http://120.92.209.146:8081/) | [Paper](https://arxiv.org/abs/2312.00849)
## News
* [2024.05.28] 📃 Our RLAIF-V paper is accessible at [arXiv](https://arxiv.org/abs/2405.17220) now!
* [2024.05.20] 🎉 We introduce [RLAIF-V](https://github.com/RLHF-V/RLAIF-V), our new alignment framework that utilizes open-source models for feedback generation and reaches **super GPT-4V trustworthiness**. You can download the corresponding [dataset](https://huggingface.co/datasets/openbmb/RLAIF-V-Dataset) and models ([7B](https://huggingface.co/openbmb/RLAIF-V-7B), [12B](https://huggingface.co/openbmb/RLAIF-V-12B)) now!
* [2024.04.11] 🔥 Our data is used in [MiniCPM-V 2.0](https://huggingface.co/openbmb/MiniCPM-V-2), an **end-side** multimodal large language model that exhibits **trustworthiness comparable to GPT-4V**!
## Brief Introduction
RLHF-V is an open-source multimodal large language model with the **lowest hallucination rate** on both long-form instructions and short-form questions.
RLHF-V is trained on [RLHF-V-Dataset](https://huggingface.co/datasets/HaoyeZhang/RLHF-V-Dataset), which contains **fine-grained segment-level human corrections** on diverse instructions. The base model is trained on [UniMM-Chat](https://huggingface.co/datasets/Yirany/UniMM-Chat), which is a high-quality knowledge-intensive SFT dataset. We introduce a new method **Dense Direct Preference Optimization (DDPO)** that can make better use of the fine-grained annotations.
For more details, please refer to our [paper](https://arxiv.org/abs/2312.00849).

## Model Details
### Model Description
- **Trained from model:** Vicuna-13B
- **Trained on data:** [RLHF-V-Dataset](https://huggingface.co/datasets/HaoyeZhang/RLHF-V-Dataset)
### Model Sources
- **Project Page:** https://rlhf-v.github.io
- **GitHub Repository:** https://github.com/RLHF-V/RLHF-V
- **Demo:** http://120.92.209.146:8081
- **Paper:** https://arxiv.org/abs/2312.00849
## Performance
Low hallucination rate while being informative:

More resistant to over-generalization, even compared to GPT-4V:

## Citation
If you find this work helpful, please consider citing our papers 📝:
```bibtex
@article{yu2023rlhf,
title={Rlhf-v: Towards trustworthy mllms via behavior alignment from fine-grained correctional human feedback},
author={Yu, Tianyu and Yao, Yuan and Zhang, Haoye and He, Taiwen and Han, Yifeng and Cui, Ganqu and Hu, Jinyi and Liu, Zhiyuan and Zheng, Hai-Tao and Sun, Maosong and others},
journal={arXiv preprint arXiv:2312.00849},
year={2023}
}
@article{yu2024rlaifv,
title={RLAIF-V: Aligning MLLMs through Open-Source AI Feedback for Super GPT-4V Trustworthiness},
author={Yu, Tianyu and Zhang, Haoye and Yao, Yuan and Dang, Yunkai and Chen, Da and Lu, Xiaoman and Cui, Ganqu and He, Taiwen and Liu, Zhiyuan and Chua, Tat-Seng and Sun, Maosong},
journal={arXiv preprint arXiv:2405.17220},
year={2024},
}
``` |
DiederikMartens/gBERT_sa_cv_11_fold5 | DiederikMartens | 2024-05-28T04:31:03Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-german-cased",
"base_model:finetune:google-bert/bert-base-german-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-05-28T04:18:33Z | ---
license: mit
base_model: google-bert/bert-base-german-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: gBERT_sa_cv_11_fold5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gBERT_sa_cv_11_fold5
This model is a fine-tuned version of [google-bert/bert-base-german-cased](https://huggingface.co/google-bert/bert-base-german-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5662
- F1: 0.6406
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.4246 | 0.5300 |
| No log | 2.0 | 452 | 0.4780 | 0.6363 |
| 0.3451 | 3.0 | 678 | 0.5662 | 0.6406 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
vipinkatara/mistral-7b-sft-full | vipinkatara | 2024-05-28T04:30:21Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:HuggingFaceH4/ultrachat_200k",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-05-27T12:00:44Z | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrachat_200k
model-index:
- name: mistral-7b-sft-full
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-7b-sft-full
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the HuggingFaceH4/ultrachat_200k dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0397
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0295 | 1.0 | 17429 | 1.0397 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.1.2+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
DiederikMartens/tsBERT_sa_cv_11_fold4 | DiederikMartens | 2024-05-28T04:23:51Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:igorsterner/german-english-code-switching-bert",
"base_model:finetune:igorsterner/german-english-code-switching-bert",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-05-28T04:10:12Z | ---
license: mit
base_model: igorsterner/german-english-code-switching-bert
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: tsBERT_sa_cv_11_fold4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tsBERT_sa_cv_11_fold4
This model is a fine-tuned version of [igorsterner/german-english-code-switching-bert](https://huggingface.co/igorsterner/german-english-code-switching-bert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4937
- F1: 0.6986
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.3932 | 0.5830 |
| No log | 2.0 | 452 | 0.4407 | 0.6642 |
| 0.3311 | 3.0 | 678 | 0.4937 | 0.6986 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
DiederikMartens/mBERT_sa_cv_11_fold4 | DiederikMartens | 2024-05-28T04:23:20Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-05-28T04:09:44Z | ---
license: apache-2.0
base_model: google-bert/bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: mBERT_sa_cv_11_fold4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mBERT_sa_cv_11_fold4
This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5026
- F1: 0.6138
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.5130 | 0.4891 |
| No log | 2.0 | 452 | 0.4726 | 0.5498 |
| 0.4205 | 3.0 | 678 | 0.5026 | 0.6138 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
DiederikMartens/gBERT_sa_cv_11_fold4 | DiederikMartens | 2024-05-28T04:18:29Z | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-german-cased",
"base_model:finetune:google-bert/bert-base-german-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-05-28T04:05:42Z | ---
license: mit
base_model: google-bert/bert-base-german-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: gBERT_sa_cv_11_fold4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gBERT_sa_cv_11_fold4
This model is a fine-tuned version of [google-bert/bert-base-german-cased](https://huggingface.co/google-bert/bert-base-german-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5779
- F1: 0.6417
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.4684 | 0.5523 |
| No log | 2.0 | 452 | 0.4958 | 0.5744 |
| 0.3446 | 3.0 | 678 | 0.5779 | 0.6417 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
furkanbicer/ppo-Huggy | furkanbicer | 2024-05-28T04:15:59Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2024-05-28T04:15:46Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to help you train your first agent with ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: furkanbicer/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
DiederikMartens/eBERT_sa_cv_11_fold3 | DiederikMartens | 2024-05-28T04:11:45Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-05-28T03:57:34Z | ---
license: apache-2.0
base_model: google-bert/bert-base-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: eBERT_sa_cv_11_fold3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eBERT_sa_cv_11_fold3
This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4772
- F1: 0.5050
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.5545 | 0.4035 |
| No log | 2.0 | 452 | 0.4711 | 0.4889 |
| 0.5455 | 3.0 | 678 | 0.4772 | 0.5050 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Vichentito/Nahuatl_Espanol_vn | Vichentito | 2024-05-28T04:11:28Z | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2024-05-14T21:48:59Z | ---
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: Nahuatl_Espanol_vn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Nahuatl_Espanol_vn
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1464
- Bleu: 15.4218
- Gen Len: 45.5239
## Model description
More information needed
## Intended uses & limitations
More information needed
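A minimal usage sketch with the text2text-generation pipeline follows; the translation direction, any required prompt prefix, and the example sentence are assumptions, since none of them are documented in this card.

```python
from transformers import pipeline

# Load the fine-tuned T5 checkpoint from the Hub.
translator = pipeline("text2text-generation", model="Vichentito/Nahuatl_Espanol_vn")

# Placeholder input: the expected direction and prompt format are not documented.
print(translator("Niltze, quen tinemi?", max_length=64))
```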
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:------:|:----:|:---------------:|:-------:|:-------:|
| No log | 0.1064 | 100 | 1.1525 | 14.738 | 46.3599 |
| No log | 0.2128 | 200 | 1.1682 | 14.2823 | 45.9297 |
| No log | 0.3191 | 300 | 1.1739 | 14.2118 | 46.4243 |
| No log | 0.4255 | 400 | 1.1799 | 14.3198 | 45.9266 |
| 1.3984 | 0.5319 | 500 | 1.1771 | 14.0972 | 46.2179 |
| 1.3984 | 0.6383 | 600 | 1.1752 | 14.4083 | 45.8709 |
| 1.3984 | 0.7447 | 700 | 1.1756 | 14.1914 | 46.0949 |
| 1.3984 | 0.8511 | 800 | 1.1761 | 14.4131 | 46.0528 |
| 1.3984 | 0.9574 | 900 | 1.1727 | 14.1957 | 46.4856 |
| 1.3826 | 1.0638 | 1000 | 1.1768 | 14.7451 | 45.7873 |
| 1.3826 | 1.1702 | 1100 | 1.1727 | 14.6016 | 45.8654 |
| 1.3826 | 1.2766 | 1200 | 1.1726 | 14.6549 | 45.6857 |
| 1.3826 | 1.3830 | 1300 | 1.1693 | 14.586 | 45.6052 |
| 1.3826 | 1.4894 | 1400 | 1.1704 | 14.6483 | 45.6039 |
| 1.2932 | 1.5957 | 1500 | 1.1638 | 14.921 | 45.5508 |
| 1.2932 | 1.7021 | 1600 | 1.1649 | 14.7977 | 45.3693 |
| 1.2932 | 1.8085 | 1700 | 1.1580 | 14.9676 | 45.7072 |
| 1.2932 | 1.9149 | 1800 | 1.1567 | 14.794 | 45.5877 |
| 1.2932 | 2.0213 | 1900 | 1.1607 | 15.3066 | 45.677 |
| 1.2612 | 2.1277 | 2000 | 1.1569 | 15.1152 | 45.4122 |
| 1.2612 | 2.2340 | 2100 | 1.1553 | 15.2526 | 45.4026 |
| 1.2612 | 2.3404 | 2200 | 1.1521 | 15.2022 | 45.3518 |
| 1.2612 | 2.4468 | 2300 | 1.1505 | 15.3072 | 45.5873 |
| 1.2612 | 2.5532 | 2400 | 1.1500 | 15.417 | 45.5906 |
| 1.2095 | 2.6596 | 2500 | 1.1507 | 15.394 | 45.4383 |
| 1.2095 | 2.7660 | 2600 | 1.1501 | 15.4171 | 45.4846 |
| 1.2095 | 2.8723 | 2700 | 1.1472 | 15.4497 | 45.5049 |
| 1.2095 | 2.9787 | 2800 | 1.1464 | 15.4218 | 45.5239 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
hchcsuim/batch-size-16_FFPP-Raw_1FPS_faces-expand-0-aligned | hchcsuim | 2024-05-28T04:10:33Z | 218 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2024-05-03T12:46:46Z | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: batch-size-16_FFPP-Raw_1FPS_faces-expand-0-aligned
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9764940781413078
- name: Precision
type: precision
value: 0.9754191531685138
- name: Recall
type: recall
value: 0.9950429035250464
- name: F1
type: f1
value: 0.9851333122865424
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# batch-size-16_FFPP-Raw_1FPS_faces-expand-0-aligned
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0638
- Accuracy: 0.9765
- Precision: 0.9754
- Recall: 0.9950
- F1: 0.9851
- Roc Auc: 0.9982
## Model description
More information needed
## Intended uses & limitations
More information needed
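A minimal usage sketch with the image-classification pipeline follows; the file path is a placeholder, and it assumes inputs are aligned face crops like those used during training.

```python
from transformers import pipeline

# Load the fine-tuned Swin checkpoint from the Hub.
detector = pipeline(
    "image-classification",
    model="hchcsuim/batch-size-16_FFPP-Raw_1FPS_faces-expand-0-aligned",
)

# Placeholder path: pass an aligned face crop, as used during training.
print(detector("face_crop.png"))
```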
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 0.0614 | 1.0 | 1377 | 0.0638 | 0.9765 | 0.9754 | 0.9950 | 0.9851 | 0.9982 |
### Framework versions
- Transformers 4.39.2
- Pytorch 2.3.0
- Datasets 2.18.0
- Tokenizers 0.15.2
|
DiederikMartens/mBERT_sa_cv_11_fold3 | DiederikMartens | 2024-05-28T04:09:38Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-05-28T03:56:14Z | ---
license: apache-2.0
base_model: google-bert/bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: mBERT_sa_cv_11_fold3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mBERT_sa_cv_11_fold3
This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4354
- F1: 0.5139
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.4811 | 0.2954 |
| No log | 2.0 | 452 | 0.4723 | 0.4820 |
| 0.5321 | 3.0 | 678 | 0.4354 | 0.5139 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
RustyRobot/Meta-Llama-3-8B-Instruct-Q8_0-GGUF | RustyRobot | 2024-05-28T04:04:41Z | 2 | 0 | null | [
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-05-28T04:04:15Z | ---
language:
- en
license: llama3
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- llama-cpp
- gguf-my-repo
pipeline_tag: text-generation
extra_gated_prompt: "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version\
\ Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for\
\ use, reproduction, distribution and modification of the Llama Materials set forth\
\ herein.\n\"Documentation\" means the specifications, manuals and documentation\
\ accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\
\"Licensee\" or \"you\" means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entity’s behalf), of\
\ the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama\
\ 3\" means the foundational large language models and software and algorithms,\
\ including machine-learning model code, trained model weights, inference-enabling\
\ code, training-enabling code, fine-tuning enabling code and other elements of\
\ the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\
\"Llama Materials\" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation\
\ (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"\
we\" means Meta Platforms Ireland Limited (if you are located in or, if you are\
\ an entity, your principal place of business is in the EEA or Switzerland) and\
\ Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n\
\ \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted\
\ a non-exclusive, worldwide, non-transferable and royalty-free limited license\
\ under Meta’s intellectual property or other rights owned by Meta embodied in the\
\ Llama Materials to use, reproduce, distribute, copy, create derivative works of,\
\ and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni.\
\ If you distribute or make available the Llama Materials (or any derivative works\
\ thereof), or a product or service that uses any of them, including another AI\
\ model, you shall (A) provide a copy of this Agreement with any such Llama Materials;\
\ and (B) prominently display “Built with Meta Llama 3” on a related website, user\
\ interface, blogpost, about page, or product documentation. If you use the Llama\
\ Materials to create, train, fine tune, or otherwise improve an AI model, which\
\ is distributed or made available, you shall also include “Llama 3” at the beginning\
\ of any such AI model name.\nii. If you receive Llama Materials, or any derivative\
\ works thereof, from a Licensee as part of an integrated end user product, then\
\ Section 2 of this Agreement will not apply to you.\niii. You must retain in all\
\ copies of the Llama Materials that you distribute the following attribution notice\
\ within a “Notice” text file distributed as a part of such copies: “Meta Llama\
\ 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms,\
\ Inc. All Rights Reserved.”\niv. Your use of the Llama Materials must comply with\
\ applicable laws and regulations (including trade compliance laws and regulations)\
\ and adhere to the Acceptable Use Policy for the Llama Materials (available at\
\ https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference\
\ into this Agreement.\nv. You will not use the Llama Materials or any output or\
\ results of the Llama Materials to improve any other large language model (excluding\
\ Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If,\
\ on the Meta Llama 3 version release date, the monthly active users of the products\
\ or services made available by or for Licensee, or Licensee’s affiliates, is greater\
\ than 700 million monthly active users in the preceding calendar month, you must\
\ request a license from Meta, which Meta may grant to you in its sole discretion,\
\ and you are not authorized to exercise any of the rights under this Agreement\
\ unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer\
\ of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT\
\ AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF\
\ ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED,\
\ INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY,\
\ OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING\
\ THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME\
\ ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n\
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER\
\ ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY,\
\ OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT,\
\ SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META\
\ OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n\
5. Intellectual Property.\na. No trademark licenses are granted under this Agreement,\
\ and in connection with the Llama Materials, neither Meta nor Licensee may use\
\ any name or mark owned by or associated with the other or any of its affiliates,\
\ except as required for reasonable and customary use in describing and redistributing\
\ the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you\
\ a license to use “Llama 3” (the “Mark”) solely as required to comply with the\
\ last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently\
\ accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All\
\ goodwill arising out of your use of the Mark will inure to the benefit of Meta.\n\
b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for\
\ Meta, with respect to any derivative works and modifications of the Llama Materials\
\ that are made by you, as between you and Meta, you are and will be the owner of\
\ such derivative works and modifications.\nc. If you institute litigation or other\
\ proceedings against Meta or any entity (including a cross-claim or counterclaim\
\ in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results,\
\ or any portion of any of the foregoing, constitutes infringement of intellectual\
\ property or other rights owned or licensable by you, then any licenses granted\
\ to you under this Agreement shall terminate as of the date such litigation or\
\ claim is filed or instituted. You will indemnify and hold harmless Meta from and\
\ against any claim by any third party arising out of or related to your use or\
\ distribution of the Llama Materials.\n6. Term and Termination. The term of this\
\ Agreement will commence upon your acceptance of this Agreement or access to the\
\ Llama Materials and will continue in full force and effect until terminated in\
\ accordance with the terms and conditions herein. Meta may terminate this Agreement\
\ if you are in breach of any term or condition of this Agreement. Upon termination\
\ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\
\ 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law\
\ and Jurisdiction. This Agreement will be governed and construed under the laws\
\ of the State of California without regard to choice of law principles, and the\
\ UN Convention on Contracts for the International Sale of Goods does not apply\
\ to this Agreement. The courts of California shall have exclusive jurisdiction\
\ of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use\
\ Policy\nMeta is committed to promoting safe and fair use of its tools and features,\
\ including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable\
\ Use Policy (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n\
#### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly.\
\ You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate\
\ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 2. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 3. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 4.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 5. Collect, process, disclose, generate, or infer health, demographic,\
\ or other sensitive personal or private information about individuals without rights\
\ and consents required by applicable laws\n 6. Engage in or facilitate any action\
\ or generate any content that infringes, misappropriates, or otherwise violates\
\ any third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 7. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n2. Engage in, promote, incite,\
\ facilitate, or assist in the planning or development of activities that present\
\ a risk of death or bodily harm to individuals, including use of Meta Llama 3 related\
\ to the following:\n 1. Military, warfare, nuclear industries or applications,\
\ espionage, use for materials or activities that are subject to the International\
\ Traffic Arms Regulations (ITAR) maintained by the United States Department of\
\ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\
\ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\
\ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\
\ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\
\ content intended to incite or promote violence, abuse, or any infliction of bodily\
\ harm to an individual\n3. Intentionally deceive or mislead others, including use\
\ of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering\
\ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\
\ or furthering defamatory content, including the creation of defamatory statements,\
\ images, or other content\n 3. Generating, promoting, or further distributing\
\ spam\n 4. Impersonating another individual without consent, authorization,\
\ or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are\
\ human-generated\n 6. Generating or facilitating false online engagement, including\
\ fake reviews and other means of fake online engagement\n4. Fail to appropriately\
\ disclose to end users any known dangers of your AI system\nPlease report any violation\
\ of this Policy, software “bug,” or other problems that could lead to a violation\
\ of this Policy through one of the following means:\n * Reporting issues with\
\ the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n\
\ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n\
\ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting\
\ violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]"
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
widget:
- example_title: Hello
messages:
- role: user
content: Hey my name is Julien! How are you?
- example_title: Winter holidays
messages:
- role: system
content: You are a helpful and honest assistant. Please, respond concisely and
truthfully.
- role: user
content: Can you recommend a good destination for Winter holidays?
- example_title: Programming assistant
messages:
- role: system
content: You are a helpful and honest code and programming assistant. Please,
respond concisely and truthfully.
- role: user
content: Write a function that computes the nth fibonacci number.
inference:
parameters:
max_new_tokens: 300
stop:
- <|end_of_text|>
- <|eot_id|>
---
# RustyRobot/Meta-Llama-3-8B-Instruct-Q8_0-GGUF
This model was converted to GGUF format from [`meta-llama/Meta-Llama-3-8B-Instruct`](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo RustyRobot/Meta-Llama-3-8B-Instruct-Q8_0-GGUF --model meta-llama-3-8b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo RustyRobot/Meta-Llama-3-8B-Instruct-Q8_0-GGUF --model meta-llama-3-8b-instruct-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && \
cd llama.cpp && \
make && \
./main -m meta-llama-3-8b-instruct-q8_0.gguf -n 128
```
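As a further option, the quantized file can be loaded from Python with llama-cpp-python; this is a minimal sketch and assumes the GGUF file has already been downloaded locally.

```python
from llama_cpp import Llama

# Point at the locally downloaded quantized checkpoint.
llm = Llama(model_path="meta-llama-3-8b-instruct-q8_0.gguf", n_ctx=2048)

# Run a short completion on the same example prompt as the CLI above.
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```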
|
jonc/paligemma_vqav2 | jonc | 2024-05-28T04:04:38Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"dataset:vq_av2",
"base_model:google/paligemma-3b-pt-224",
"base_model:adapter:google/paligemma-3b-pt-224",
"license:gemma",
"region:us"
]
| null | 2024-05-28T03:35:38Z | ---
license: gemma
library_name: peft
tags:
- generated_from_trainer
base_model: google/paligemma-3b-pt-224
datasets:
- vq_av2
model-index:
- name: paligemma_vqav2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# paligemma_vqav2
This model is a fine-tuned version of [google/paligemma-3b-pt-224](https://huggingface.co/google/paligemma-3b-pt-224) on the vq_av2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
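A minimal inference sketch is given below; it assumes the adapter can be applied on top of the base checkpoint with PEFT, and the image path, question, and `answer en` prompt prefix are placeholders rather than documented choices of this fine-tune.

```python
import torch
from PIL import Image
from peft import PeftModel
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

base_id = "google/paligemma-3b-pt-224"
processor = AutoProcessor.from_pretrained(base_id)

# Load the base model and apply the fine-tuned LoRA adapter.
model = PaliGemmaForConditionalGeneration.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(model, "jonc/paligemma_vqav2")

# Placeholder image and question.
inputs = processor(
    text="answer en What is in the picture?",
    images=Image.open("example.jpg"),
    return_tensors="pt",
)
output = model.generate(**inputs, max_new_tokens=20)
print(processor.decode(output[0], skip_special_tokens=True))
```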
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.42.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
mradermacher/Nyakura-CausalLM-RP-34B-GGUF | mradermacher | 2024-05-28T04:03:21Z | 16 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2024-04-05T04:30:24Z | ---
base_model: ParasiticRogue/Nyakura-CausalLM-RP-34B
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
license_name: yi-34b
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/ParasiticRogue/Nyakura-CausalLM-RP-34B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Nyakura-CausalLM-RP-34B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Nyakura-CausalLM-RP-34B-GGUF/resolve/main/Nyakura-CausalLM-RP-34B.Q2_K.gguf) | Q2_K | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/Nyakura-CausalLM-RP-34B-GGUF/resolve/main/Nyakura-CausalLM-RP-34B.IQ3_XS.gguf) | IQ3_XS | 14.3 | |
| [GGUF](https://huggingface.co/mradermacher/Nyakura-CausalLM-RP-34B-GGUF/resolve/main/Nyakura-CausalLM-RP-34B.Q3_K_S.gguf) | Q3_K_S | 15.1 | |
| [GGUF](https://huggingface.co/mradermacher/Nyakura-CausalLM-RP-34B-GGUF/resolve/main/Nyakura-CausalLM-RP-34B.IQ3_S.gguf) | IQ3_S | 15.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Nyakura-CausalLM-RP-34B-GGUF/resolve/main/Nyakura-CausalLM-RP-34B.IQ3_M.gguf) | IQ3_M | 15.7 | |
| [GGUF](https://huggingface.co/mradermacher/Nyakura-CausalLM-RP-34B-GGUF/resolve/main/Nyakura-CausalLM-RP-34B.Q3_K_M.gguf) | Q3_K_M | 16.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Nyakura-CausalLM-RP-34B-GGUF/resolve/main/Nyakura-CausalLM-RP-34B.Q3_K_L.gguf) | Q3_K_L | 18.2 | |
| [GGUF](https://huggingface.co/mradermacher/Nyakura-CausalLM-RP-34B-GGUF/resolve/main/Nyakura-CausalLM-RP-34B.IQ4_XS.gguf) | IQ4_XS | 18.7 | |
| [GGUF](https://huggingface.co/mradermacher/Nyakura-CausalLM-RP-34B-GGUF/resolve/main/Nyakura-CausalLM-RP-34B.Q4_K_S.gguf) | Q4_K_S | 19.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Nyakura-CausalLM-RP-34B-GGUF/resolve/main/Nyakura-CausalLM-RP-34B.Q4_K_M.gguf) | Q4_K_M | 20.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Nyakura-CausalLM-RP-34B-GGUF/resolve/main/Nyakura-CausalLM-RP-34B.Q5_K_S.gguf) | Q5_K_S | 23.8 | |
| [GGUF](https://huggingface.co/mradermacher/Nyakura-CausalLM-RP-34B-GGUF/resolve/main/Nyakura-CausalLM-RP-34B.Q5_K_M.gguf) | Q5_K_M | 24.4 | |
| [GGUF](https://huggingface.co/mradermacher/Nyakura-CausalLM-RP-34B-GGUF/resolve/main/Nyakura-CausalLM-RP-34B.Q6_K.gguf) | Q6_K | 28.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Nyakura-CausalLM-RP-34B-GGUF/resolve/main/Nyakura-CausalLM-RP-34B.Q8_0.gguf) | Q8_0 | 36.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
john001225/acheron_LoRA | john001225 | 2024-05-28T04:00:55Z | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| text-to-image | 2024-05-28T02:06:22Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of TOK acheron
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - john001225/acheron_LoRA
<Gallery />
## Model description
These are john001225/acheron_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of TOK acheron` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](john001225/acheron_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
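In the absence of an official snippet, a minimal sketch with 🤗 Diffusers might look like the following; the sampler settings and step count are assumptions left close to their defaults.

```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base pipeline and attach the trained LoRA weights.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("john001225/acheron_LoRA")

# Use the trigger phrase from the "Trigger words" section above.
image = pipe("a photo of TOK acheron", num_inference_steps=25).images[0]
image.save("acheron.png")
```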
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
mradermacher/Nontoxic-PiVoT-Bagel-RP-34b-i1-GGUF | mradermacher | 2024-05-28T03:59:48Z | 10 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2024-04-09T04:21:27Z | ---
base_model: ParasiticRogue/Nontoxic-PiVoT-Bagel-RP-34b
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
license_name: yi-34b
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/ParasiticRogue/Nontoxic-PiVoT-Bagel-RP-34b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Nontoxic-PiVoT-Bagel-RP-34b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Nontoxic-PiVoT-Bagel-RP-34b-i1-GGUF/resolve/main/Nontoxic-PiVoT-Bagel-RP-34b.i1-IQ1_S.gguf) | i1-IQ1_S | 7.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Nontoxic-PiVoT-Bagel-RP-34b-i1-GGUF/resolve/main/Nontoxic-PiVoT-Bagel-RP-34b.i1-IQ1_M.gguf) | i1-IQ1_M | 8.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Nontoxic-PiVoT-Bagel-RP-34b-i1-GGUF/resolve/main/Nontoxic-PiVoT-Bagel-RP-34b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/Nontoxic-PiVoT-Bagel-RP-34b-i1-GGUF/resolve/main/Nontoxic-PiVoT-Bagel-RP-34b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Nontoxic-PiVoT-Bagel-RP-34b-i1-GGUF/resolve/main/Nontoxic-PiVoT-Bagel-RP-34b.i1-IQ2_S.gguf) | i1-IQ2_S | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/Nontoxic-PiVoT-Bagel-RP-34b-i1-GGUF/resolve/main/Nontoxic-PiVoT-Bagel-RP-34b.i1-IQ2_M.gguf) | i1-IQ2_M | 11.9 | |
| [GGUF](https://huggingface.co/mradermacher/Nontoxic-PiVoT-Bagel-RP-34b-i1-GGUF/resolve/main/Nontoxic-PiVoT-Bagel-RP-34b.i1-Q2_K.gguf) | i1-Q2_K | 12.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Nontoxic-PiVoT-Bagel-RP-34b-i1-GGUF/resolve/main/Nontoxic-PiVoT-Bagel-RP-34b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 13.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Nontoxic-PiVoT-Bagel-RP-34b-i1-GGUF/resolve/main/Nontoxic-PiVoT-Bagel-RP-34b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 14.3 | |
| [GGUF](https://huggingface.co/mradermacher/Nontoxic-PiVoT-Bagel-RP-34b-i1-GGUF/resolve/main/Nontoxic-PiVoT-Bagel-RP-34b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 15.1 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Nontoxic-PiVoT-Bagel-RP-34b-i1-GGUF/resolve/main/Nontoxic-PiVoT-Bagel-RP-34b.i1-IQ3_S.gguf) | i1-IQ3_S | 15.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Nontoxic-PiVoT-Bagel-RP-34b-i1-GGUF/resolve/main/Nontoxic-PiVoT-Bagel-RP-34b.i1-IQ3_M.gguf) | i1-IQ3_M | 15.7 | |
| [GGUF](https://huggingface.co/mradermacher/Nontoxic-PiVoT-Bagel-RP-34b-i1-GGUF/resolve/main/Nontoxic-PiVoT-Bagel-RP-34b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Nontoxic-PiVoT-Bagel-RP-34b-i1-GGUF/resolve/main/Nontoxic-PiVoT-Bagel-RP-34b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 18.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Nontoxic-PiVoT-Bagel-RP-34b-i1-GGUF/resolve/main/Nontoxic-PiVoT-Bagel-RP-34b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 18.6 | |
| [GGUF](https://huggingface.co/mradermacher/Nontoxic-PiVoT-Bagel-RP-34b-i1-GGUF/resolve/main/Nontoxic-PiVoT-Bagel-RP-34b.i1-Q4_0.gguf) | i1-Q4_0 | 19.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Nontoxic-PiVoT-Bagel-RP-34b-i1-GGUF/resolve/main/Nontoxic-PiVoT-Bagel-RP-34b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 19.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Nontoxic-PiVoT-Bagel-RP-34b-i1-GGUF/resolve/main/Nontoxic-PiVoT-Bagel-RP-34b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Nontoxic-PiVoT-Bagel-RP-34b-i1-GGUF/resolve/main/Nontoxic-PiVoT-Bagel-RP-34b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 23.8 | |
| [GGUF](https://huggingface.co/mradermacher/Nontoxic-PiVoT-Bagel-RP-34b-i1-GGUF/resolve/main/Nontoxic-PiVoT-Bagel-RP-34b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 24.4 | |
| [GGUF](https://huggingface.co/mradermacher/Nontoxic-PiVoT-Bagel-RP-34b-i1-GGUF/resolve/main/Nontoxic-PiVoT-Bagel-RP-34b.i1-Q6_K.gguf) | i1-Q6_K | 28.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
CMU-AIR2/math-llama3-arith2k-mwp2k | CMU-AIR2 | 2024-05-28T03:59:08Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"arxiv:1910.09700",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:adapter:meta-llama/Meta-Llama-3-8B",
"region:us"
]
| null | 2024-05-28T03:46:01Z | ---
library_name: peft
base_model: meta-llama/Meta-Llama-3-8B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
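Pending the authors' own snippet, a minimal sketch for loading the adapter with PEFT might look like this; it assumes access to the gated meta-llama/Meta-Llama-3-8B base weights, and the prompt format is a placeholder.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Load the base model, then apply the fine-tuned LoRA adapter.
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, "CMU-AIR2/math-llama3-arith2k-mwp2k")

# Placeholder prompt: the expected prompt format is not documented in this card.
inputs = tokenizer("Q: What is 17 * 24?\nA:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```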
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 |
CMU-AIR2/math-llama3-arith2k-mwp4k | CMU-AIR2 | 2024-05-28T03:59:03Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"arxiv:1910.09700",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:adapter:meta-llama/Meta-Llama-3-8B",
"region:us"
]
| null | 2024-05-28T03:46:07Z | ---
library_name: peft
base_model: meta-llama/Meta-Llama-3-8B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 |
DiederikMartens/eBERT_sa_cv_11_fold2 | DiederikMartens | 2024-05-28T03:57:30Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-05-28T03:43:31Z | ---
license: apache-2.0
base_model: google-bert/bert-base-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: eBERT_sa_cv_11_fold2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eBERT_sa_cv_11_fold2
This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5328
- F1: 0.5037
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.4960 | 0.3946 |
| No log | 2.0 | 452 | 0.5294 | 0.4754 |
| 0.5027 | 3.0 | 678 | 0.5328 | 0.5037 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Dang-gu/fly_stack_finetune | Dang-gu | 2024-05-28T03:56:36Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:bigcode/starcoder2-3b",
"base_model:adapter:bigcode/starcoder2-3b",
"license:bigcode-openrail-m",
"region:us"
]
| null | 2024-05-28T03:56:29Z | ---
license: bigcode-openrail-m
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: bigcode/starcoder2-3b
model-index:
- name: fly_stack_finetune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fly_stack_finetune
This model is a fine-tuned version of [bigcode/starcoder2-3b](https://huggingface.co/bigcode/starcoder2-3b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 0
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- training_steps: 10000
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.40.0
- Pytorch 2.3.0+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1 |
mradermacher/Yi-34B-200K-XLCTX-GGUF | mradermacher | 2024-05-28T03:48:39Z | 3 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:adamo1139/Yi-34B-200K-XLCTX",
"base_model:quantized:adamo1139/Yi-34B-200K-XLCTX",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-03T13:26:50Z | ---
base_model: adamo1139/Yi-34B-200K-XLCTX
language:
- en
library_name: transformers
license: apache-2.0
license_link: LICENSE
license_name: yi-license
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/adamo1139/Yi-34B-200K-XLCTX
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-XLCTX-GGUF/resolve/main/Yi-34B-200K-XLCTX.Q2_K.gguf) | Q2_K | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-XLCTX-GGUF/resolve/main/Yi-34B-200K-XLCTX.IQ3_XS.gguf) | IQ3_XS | 14.3 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-XLCTX-GGUF/resolve/main/Yi-34B-200K-XLCTX.Q3_K_S.gguf) | Q3_K_S | 15.1 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-XLCTX-GGUF/resolve/main/Yi-34B-200K-XLCTX.IQ3_S.gguf) | IQ3_S | 15.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-XLCTX-GGUF/resolve/main/Yi-34B-200K-XLCTX.IQ3_M.gguf) | IQ3_M | 15.7 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-XLCTX-GGUF/resolve/main/Yi-34B-200K-XLCTX.Q3_K_M.gguf) | Q3_K_M | 16.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-XLCTX-GGUF/resolve/main/Yi-34B-200K-XLCTX.Q3_K_L.gguf) | Q3_K_L | 18.2 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-XLCTX-GGUF/resolve/main/Yi-34B-200K-XLCTX.IQ4_XS.gguf) | IQ4_XS | 18.7 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-XLCTX-GGUF/resolve/main/Yi-34B-200K-XLCTX.Q4_K_S.gguf) | Q4_K_S | 19.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-XLCTX-GGUF/resolve/main/Yi-34B-200K-XLCTX.Q4_K_M.gguf) | Q4_K_M | 20.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-XLCTX-GGUF/resolve/main/Yi-34B-200K-XLCTX.Q5_K_S.gguf) | Q5_K_S | 23.8 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-XLCTX-GGUF/resolve/main/Yi-34B-200K-XLCTX.Q5_K_M.gguf) | Q5_K_M | 24.4 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-XLCTX-GGUF/resolve/main/Yi-34B-200K-XLCTX.Q6_K.gguf) | Q6_K | 28.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-XLCTX-GGUF/resolve/main/Yi-34B-200K-XLCTX.Q8_0.gguf) | Q8_0 | 36.6 | fast, best quality |
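As a minimal sketch, assuming you pick the Q4_K_M file from the table above, downloading and running one quant with llama.cpp might look like this (the prompt, context size, and token count are purely illustrative):
```bash
# Download a single quant from this repo (assumes huggingface_hub is installed).
huggingface-cli download mradermacher/Yi-34B-200K-XLCTX-GGUF \
  Yi-34B-200K-XLCTX.Q4_K_M.gguf --local-dir .

# Run it with llama.cpp's CLI; adjust the context size and prompt to taste.
llama-cli -m Yi-34B-200K-XLCTX.Q4_K_M.gguf -c 8192 -n 256 \
  -p "Write a short story about a lighthouse keeper."
```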
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
DiederikMartens/mBERT_sa_cv_11_fold1 | DiederikMartens | 2024-05-28T03:42:38Z | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-05-28T03:29:03Z | ---
license: apache-2.0
base_model: google-bert/bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: mBERT_sa_cv_11_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mBERT_sa_cv_11_fold1
This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5091
- F1: 0.6016
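A minimal inference sketch, assuming the checkpoint ships its own label mapping (the card does not document the class names or the dataset):
```python
# Minimal sketch: run the fine-tuned checkpoint through the text-classification pipeline.
# The label names come from whatever id2label mapping the config stores (an assumption).
from transformers import pipeline

clf = pipeline("text-classification", model="DiederikMartens/mBERT_sa_cv_11_fold1")
print(clf("The product arrived on time and works great."))
```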
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.4489 | 0.4887 |
| No log | 2.0 | 452 | 0.3971 | 0.5409 |
| 0.4512 | 3.0 | 678 | 0.5091 | 0.6016 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
mradermacher/Gaston_dolphin-2.9.1-yi-1.5-9b-GGUF | mradermacher | 2024-05-28T03:42:10Z | 36 | 0 | transformers | [
"transformers",
"gguf",
"trl",
"orpo",
"unsloth",
"generated_from_trainer",
"en",
"base_model:baconnier/Gaston_dolphin-2.9.1-yi-1.5-9b",
"base_model:quantized:baconnier/Gaston_dolphin-2.9.1-yi-1.5-9b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2024-05-19T22:38:15Z | ---
base_model: baconnier/Gaston_dolphin-2.9.1-yi-1.5-9b
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- trl
- orpo
- unsloth
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/baconnier/Gaston_dolphin-2.9.1-yi-1.5-9b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Gaston_dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/Gaston_dolphin-2.9.1-yi-1.5-9b.Q2_K.gguf) | Q2_K | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Gaston_dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/Gaston_dolphin-2.9.1-yi-1.5-9b.IQ3_XS.gguf) | IQ3_XS | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Gaston_dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/Gaston_dolphin-2.9.1-yi-1.5-9b.Q3_K_S.gguf) | Q3_K_S | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Gaston_dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/Gaston_dolphin-2.9.1-yi-1.5-9b.IQ3_S.gguf) | IQ3_S | 4.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Gaston_dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/Gaston_dolphin-2.9.1-yi-1.5-9b.IQ3_M.gguf) | IQ3_M | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Gaston_dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/Gaston_dolphin-2.9.1-yi-1.5-9b.Q3_K_M.gguf) | Q3_K_M | 4.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Gaston_dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/Gaston_dolphin-2.9.1-yi-1.5-9b.Q3_K_L.gguf) | Q3_K_L | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Gaston_dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/Gaston_dolphin-2.9.1-yi-1.5-9b.IQ4_XS.gguf) | IQ4_XS | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gaston_dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/Gaston_dolphin-2.9.1-yi-1.5-9b.Q4_K_S.gguf) | Q4_K_S | 5.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gaston_dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/Gaston_dolphin-2.9.1-yi-1.5-9b.Q4_K_M.gguf) | Q4_K_M | 5.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gaston_dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/Gaston_dolphin-2.9.1-yi-1.5-9b.Q5_K_S.gguf) | Q5_K_S | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/Gaston_dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/Gaston_dolphin-2.9.1-yi-1.5-9b.Q5_K_M.gguf) | Q5_K_M | 6.4 | |
| [GGUF](https://huggingface.co/mradermacher/Gaston_dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/Gaston_dolphin-2.9.1-yi-1.5-9b.Q6_K.gguf) | Q6_K | 7.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Gaston_dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/Gaston_dolphin-2.9.1-yi-1.5-9b.Q8_0.gguf) | Q8_0 | 9.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Gaston_dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/Gaston_dolphin-2.9.1-yi-1.5-9b.f16.gguf) | f16 | 17.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Alpha-Orionis-2x7B-v0.1-GGUF | mradermacher | 2024-05-28T03:39:11Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-27T16:16:40Z | ---
base_model: fearlessdots/Alpha-Orionis-2x7B-v0.1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/fearlessdots/Alpha-Orionis-2x7B-v0.1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Alpha-Orionis-2x7B-v0.1-GGUF/resolve/main/Alpha-Orionis-2x7B-v0.1.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Alpha-Orionis-2x7B-v0.1-GGUF/resolve/main/Alpha-Orionis-2x7B-v0.1.IQ3_XS.gguf) | IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Alpha-Orionis-2x7B-v0.1-GGUF/resolve/main/Alpha-Orionis-2x7B-v0.1.Q3_K_S.gguf) | Q3_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Alpha-Orionis-2x7B-v0.1-GGUF/resolve/main/Alpha-Orionis-2x7B-v0.1.IQ3_S.gguf) | IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Alpha-Orionis-2x7B-v0.1-GGUF/resolve/main/Alpha-Orionis-2x7B-v0.1.IQ3_M.gguf) | IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Alpha-Orionis-2x7B-v0.1-GGUF/resolve/main/Alpha-Orionis-2x7B-v0.1.Q3_K_M.gguf) | Q3_K_M | 6.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Alpha-Orionis-2x7B-v0.1-GGUF/resolve/main/Alpha-Orionis-2x7B-v0.1.Q3_K_L.gguf) | Q3_K_L | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Alpha-Orionis-2x7B-v0.1-GGUF/resolve/main/Alpha-Orionis-2x7B-v0.1.IQ4_XS.gguf) | IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/Alpha-Orionis-2x7B-v0.1-GGUF/resolve/main/Alpha-Orionis-2x7B-v0.1.Q4_K_S.gguf) | Q4_K_S | 7.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Alpha-Orionis-2x7B-v0.1-GGUF/resolve/main/Alpha-Orionis-2x7B-v0.1.Q4_K_M.gguf) | Q4_K_M | 7.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Alpha-Orionis-2x7B-v0.1-GGUF/resolve/main/Alpha-Orionis-2x7B-v0.1.Q5_K_S.gguf) | Q5_K_S | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/Alpha-Orionis-2x7B-v0.1-GGUF/resolve/main/Alpha-Orionis-2x7B-v0.1.Q5_K_M.gguf) | Q5_K_M | 9.2 | |
| [GGUF](https://huggingface.co/mradermacher/Alpha-Orionis-2x7B-v0.1-GGUF/resolve/main/Alpha-Orionis-2x7B-v0.1.Q6_K.gguf) | Q6_K | 10.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Alpha-Orionis-2x7B-v0.1-GGUF/resolve/main/Alpha-Orionis-2x7B-v0.1.Q8_0.gguf) | Q8_0 | 13.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
DiederikMartens/eBERT_sa_cv_11_fold0 | DiederikMartens | 2024-05-28T03:29:06Z | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-05-28T03:15:00Z | ---
license: apache-2.0
base_model: google-bert/bert-base-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: eBERT_sa_cv_11_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eBERT_sa_cv_11_fold0
This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5100
- F1: 0.4714
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.5416 | 0.3084 |
| No log | 2.0 | 452 | 0.5049 | 0.4658 |
| 0.5389 | 3.0 | 678 | 0.5100 | 0.4714 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
DiederikMartens/mBERT_sa_cv_11_fold0 | DiederikMartens | 2024-05-28T03:28:57Z | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-05-28T03:15:00Z | ---
license: apache-2.0
base_model: google-bert/bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: mBERT_sa_cv_11_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mBERT_sa_cv_11_fold0
This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5404
- F1: 0.5529
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.4888 | 0.4776 |
| No log | 2.0 | 452 | 0.4595 | 0.4610 |
| 0.4729 | 3.0 | 678 | 0.5404 | 0.5529 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
BEE-spoke-data/Jamba-900M-doc-writer | BEE-spoke-data | 2024-05-28T03:22:11Z | 101 | 2 | transformers | [
"transformers",
"safetensors",
"jamba",
"text-generation",
"textbook",
"16384",
"long document",
"en",
"base_model:pszemraj/jamba-900M-v0.13-KIx2",
"base_model:finetune:pszemraj/jamba-900M-v0.13-KIx2",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
]
| text-generation | 2024-05-19T19:45:23Z | ---
license: apache-2.0
base_model: pszemraj/jamba-900M-v0.13-KIx2
tags:
- textbook
- '16384'
- long document
metrics:
- accuracy
language:
- en
inference: false
---
# BEE-spoke-data/Jamba-900M-doc-writer
> to test it out, try [this notebook](https://colab.research.google.com/gist/pszemraj/28985fdbbb2460f8375d2d84b8babe9a/jamba-test-sandbox.ipynb)
This model produces long, surprisingly coherent output that extends some input text; you can see an example [here](https://gist.github.com/pszemraj/b7c7ac65e56365cf5eab69622f16b356), which is a generated textbook about underwater city design.

Thanks to the Jamba arch, it uses low VRAM while generating outputs: about 2.5 GB VRAM to generate 12,288 tokens.
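A minimal generation sketch, assuming the checkpoint loads through `AutoModelForCausalLM` (the exact loading details may differ from the linked notebook; `trust_remote_code`, the sampling settings, and the prompt below are assumptions):
```python
# Minimal sketch: extend a textbook-style "Introduction" prompt with long-form output.
# trust_remote_code and the generation settings are assumptions, not taken from the card.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "BEE-spoke-data/Jamba-900M-doc-writer"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,  # assumption: the Jamba blocks may require custom code
    device_map="auto",
)

prompt = (
    "Introduction\n\n"
    "Designing underwater cities requires rethinking almost every system "
    "we take for granted on land."
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```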
## Model description
This model is a fine-tuned version of [pszemraj/jamba-900M-v0.13-KIx2](https://huggingface.co/pszemraj/jamba-900M-v0.13-KIx2) on some textbook data.
It achieves the following results on the evaluation set:
- Loss: 3.0200
- Accuracy: 0.4544
- Num Input Tokens Seen: 4940890112
## Intended Uses & Limitations
- Long context generation
- It requires a rather long prompt (aka 'Introduction') to be coaxed into consistently producing long, textbook-like text
- the model itself is small, so its reasoning, knowledge, etc. are limited, but still impressive for the size (hidden size 1024)
--- |
indirajith-jithu/llama-3-8b-tenjin-Q4_K_M-GGUF | indirajith-jithu | 2024-05-28T03:20:58Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2024-05-28T03:20:43Z | ---
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
---
# indirajith-jithu/llama-3-8b-tenjin-Q4_K_M-GGUF
This model was converted to GGUF format from [`indirajith-jithu/llama-3-8b-tenjin`](https://huggingface.co/indirajith-jithu/llama-3-8b-tenjin) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/indirajith-jithu/llama-3-8b-tenjin) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo indirajith-jithu/llama-3-8b-tenjin-Q4_K_M-GGUF --model llama-3-8b-tenjin-q4_k_m.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo indirajith-jithu/llama-3-8b-tenjin-Q4_K_M-GGUF --model llama-3-8b-tenjin-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && \
cd llama.cpp && \
make && \
./main -m llama-3-8b-tenjin-q4_k_m.gguf -n 128
```
|
SzegedAI/Meta-Llama-3-8B.GPTQ.Q8.WebCorpusHU_D256_S3072 | SzegedAI | 2024-05-28T03:18:00Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"gptq",
"region:us"
]
| text-generation | 2024-05-28T03:03:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/MultiMash11-13B-slerp-GGUF | mradermacher | 2024-05-28T03:15:54Z | 6 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"RubielLabarta/LogoS-7Bx2-MoE-13B-v0.2",
"Eurdem/megatron_2.1_MoE_2x7B",
"en",
"base_model:allknowingroger/MultiMash11-13B-slerp",
"base_model:quantized:allknowingroger/MultiMash11-13B-slerp",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-28T02:30:35Z | ---
base_model: allknowingroger/MultiMash11-13B-slerp
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- RubielLabarta/LogoS-7Bx2-MoE-13B-v0.2
- Eurdem/megatron_2.1_MoE_2x7B
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/allknowingroger/MultiMash11-13B-slerp
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MultiMash11-13B-slerp-GGUF/resolve/main/MultiMash11-13B-slerp.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/MultiMash11-13B-slerp-GGUF/resolve/main/MultiMash11-13B-slerp.IQ3_XS.gguf) | IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/MultiMash11-13B-slerp-GGUF/resolve/main/MultiMash11-13B-slerp.Q3_K_S.gguf) | Q3_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/MultiMash11-13B-slerp-GGUF/resolve/main/MultiMash11-13B-slerp.IQ3_S.gguf) | IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MultiMash11-13B-slerp-GGUF/resolve/main/MultiMash11-13B-slerp.IQ3_M.gguf) | IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/MultiMash11-13B-slerp-GGUF/resolve/main/MultiMash11-13B-slerp.Q3_K_M.gguf) | Q3_K_M | 6.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MultiMash11-13B-slerp-GGUF/resolve/main/MultiMash11-13B-slerp.Q3_K_L.gguf) | Q3_K_L | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/MultiMash11-13B-slerp-GGUF/resolve/main/MultiMash11-13B-slerp.IQ4_XS.gguf) | IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/MultiMash11-13B-slerp-GGUF/resolve/main/MultiMash11-13B-slerp.Q4_K_S.gguf) | Q4_K_S | 7.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MultiMash11-13B-slerp-GGUF/resolve/main/MultiMash11-13B-slerp.Q4_K_M.gguf) | Q4_K_M | 7.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MultiMash11-13B-slerp-GGUF/resolve/main/MultiMash11-13B-slerp.Q5_K_S.gguf) | Q5_K_S | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/MultiMash11-13B-slerp-GGUF/resolve/main/MultiMash11-13B-slerp.Q5_K_M.gguf) | Q5_K_M | 9.2 | |
| [GGUF](https://huggingface.co/mradermacher/MultiMash11-13B-slerp-GGUF/resolve/main/MultiMash11-13B-slerp.Q6_K.gguf) | Q6_K | 10.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MultiMash11-13B-slerp-GGUF/resolve/main/MultiMash11-13B-slerp.Q8_0.gguf) | Q8_0 | 13.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
zbigi/gpt2-sentiment_analysis | zbigi | 2024-05-28T03:12:59Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"dataset:financial_phrasebank",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"license:mit",
"region:us"
]
| null | 2024-05-26T15:34:26Z | ---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: openai-community/gpt2
datasets:
- financial_phrasebank
metrics:
- accuracy
model-index:
- name: gpt2-sentiment_analysis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-sentiment_analysis
This model is a fine-tuned version of [openai-community/gpt2](https://huggingface.co/openai-community/gpt2) on the financial_phrasebank dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6571
- Accuracy: {'accuracy': 0.8239339752407153}
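A minimal loading sketch, assuming the adapter was trained for sequence classification with the three financial_phrasebank labels (negative/neutral/positive); neither detail is stated explicitly in the card:
```python
# Minimal sketch: attach the PEFT adapter to the GPT-2 base and classify a sentence.
# num_labels=3 and the label semantics are assumptions based on financial_phrasebank.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "openai-community/gpt2"
adapter_id = "zbigi/gpt2-sentiment_analysis"

tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=3)
base.config.pad_token_id = tokenizer.pad_token_id
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("The company's quarterly profit rose sharply.", return_tensors="pt")
with torch.no_grad():
    predicted_class = model(**inputs).logits.argmax(dim=-1).item()
print(predicted_class)
```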
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0006
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------------------------------:|
| No log | 0.9981 | 257 | 0.4654 | {'accuracy': 0.8239339752407153} |
| 0.6288 | 2.0 | 515 | 0.4266 | {'accuracy': 0.8266850068775791} |
| 0.6288 | 2.9981 | 772 | 0.4558 | {'accuracy': 0.8225584594222833} |
| 0.3201 | 4.0 | 1030 | 0.4550 | {'accuracy': 0.811554332874828} |
| 0.3201 | 4.9981 | 1287 | 0.4223 | {'accuracy': 0.8294360385144429} |
| 0.2464 | 6.0 | 1545 | 0.4637 | {'accuracy': 0.8335625859697386} |
| 0.2464 | 6.9981 | 1802 | 0.5243 | {'accuracy': 0.8184319119669876} |
| 0.1859 | 8.0 | 2060 | 0.5482 | {'accuracy': 0.8335625859697386} |
| 0.1859 | 8.9981 | 2317 | 0.6443 | {'accuracy': 0.8335625859697386} |
| 0.1381 | 9.9806 | 2570 | 0.6571 | {'accuracy': 0.8239339752407153} |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
Lycoris53/style-bert-vits2-sakura-miko | Lycoris53 | 2024-05-28T03:02:14Z | 14 | 2 | transformers | [
"transformers",
"Text-To-Speech",
"Style-Bert-VITS2",
"ja",
"dataset:Elite35P-Server/EliteVoiceProject",
"license:other",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-27T14:58:38Z | ---
language:
- ja
language_creators:
- さくらみこ
- sakura miko
- hololive production
multilinguality:
- monolingual
license: other
tags:
- Text-To-Speech
- Style-Bert-VITS2
datasets:
- Elite35P-Server/EliteVoiceProject
---
# Style-Bert-VITS2 Japanese Only Sakura Miko
This is a VITS-TTS model trained on a voice dataset of Sakura Miko (さくらみこ).
You are free to download and use the model as you like, but please keep its use within the scope of hobby activities.
For details, please check the [COVER Corporation derivative works guidelines](https://hololive.hololivepro.com/guidelines/).
For usage instructions and sample audio, see [here](https://huggingface.co/spaces/Lycoris53/Style-Bert-VITS2-Test).
Style-Bert-VITS2 Sakura miko voice model finetuned using free voice data from [Elite35P-Server/EliteVoiceProject](https://huggingface.co/datasets/Elite35P-Server/EliteVoiceProject)
Finetuning code is from [litagin02/Style-Bert-VITS2](https://github.com/litagin02/Style-Bert-VITS2)
See sample usage [HERE](https://huggingface.co/spaces/Lycoris53/Style-Bert-VITS2-Test)
## Model Details
Trained for 100 epochs on 331 annotated WAV files.
A Japanese-language description is available at [AiThinkso.net](https://www.aithinkso.net/).
- **Developed by:** [Lycoris52](https://www.aithinkso.net/)
- **Finetuned from:** [litagin02](https://github.com/litagin02/Style-Bert-VITS2)
- **Dataset from:** [EliteVoiceProject](https://huggingface.co/datasets/Elite35P-Server/EliteVoiceProject/) |
Raneechu/new_combined9_ft | Raneechu | 2024-05-28T03:01:45Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
]
| null | 2024-05-28T03:01:41Z | ---
license: llama2
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Llama-2-7b-hf
model-index:
- name: new_combined9_ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# new_combined9_ft
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.1.1+cu121
- Datasets 2.14.5
- Tokenizers 0.19.1
## Training procedure
### Framework versions
- PEFT 0.6.2
|
SLDGroup/MobileViGv2 | SLDGroup | 2024-05-28T02:59:08Z | 0 | 1 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-05-18T23:46:49Z | ---
license: apache-2.0
---
|
QuantFactory/Llama-3-Instruct-8B-SimPO-GGUF | QuantFactory | 2024-05-28T02:58:25Z | 6 | 0 | transformers | [
"transformers",
"gguf",
"text-generation",
"arxiv:2405.14734",
"base_model:princeton-nlp/Llama-3-Instruct-8B-SimPO",
"base_model:quantized:princeton-nlp/Llama-3-Instruct-8B-SimPO",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-05-27T11:07:21Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model: princeton-nlp/Llama-3-Instruct-8B-SimPO
---
# QuantFactory/Llama-3-Instruct-8B-SimPO-GGUF
This is a quantized version of [princeton-nlp/Llama-3-Instruct-8B-SimPO](https://huggingface.co/princeton-nlp/Llama-3-Instruct-8B-SimPO) created using llama.cpp.
# Model Description
This is a model released with the preprint *[SimPO: Simple Preference Optimization with a Reference-Free Reward](https://arxiv.org/abs/2405.14734)*. Please refer to our [repository](https://github.com/princeton-nlp/SimPO) for more details.
|
city96/mt5-xl-fp16 | city96 | 2024-05-28T02:58:02Z | 91 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2024-05-28T00:40:39Z | ---
license: apache-2.0
---
This is a fp16 safetensors version of [Google's mT5-xl model](https://huggingface.co/google/mt5-xl) to be used in downstream inference tasks.
This repository contains both the encoder and decoder parts of the model. For just the encoder, use the following repository: [`city96/mt5-xl-encoder-fp16`](https://huggingface.co/city96/mt5-xl-encoder-fp16)
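A minimal loading sketch, assuming the repo includes tokenizer files (otherwise load the tokenizer from `google/mt5-xl`); the encoder call simply illustrates producing hidden states for a downstream pipeline:
```python
# Minimal sketch: load the fp16 checkpoint and run only the encoder,
# which is the typical role of this model in downstream inference pipelines.
import torch
from transformers import AutoTokenizer, MT5ForConditionalGeneration

model_id = "city96/mt5-xl-fp16"
tokenizer = AutoTokenizer.from_pretrained(model_id)  # assumption: tokenizer files are present
model = MT5ForConditionalGeneration.from_pretrained(model_id, torch_dtype=torch.float16).eval()

inputs = tokenizer("A short example sentence for the encoder.", return_tensors="pt")
with torch.no_grad():
    hidden = model.get_encoder()(**inputs).last_hidden_state
print(hidden.shape)  # (1, sequence_length, d_model)
```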
|
gxisme/lora_mode | gxisme | 2024-05-28T02:57:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-28T02:56:50Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
---
# Uploaded model
- **Developed by:** gxisme
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
QuantFactory/Mistral-7B-Base-SFT-RDPO-GGUF | QuantFactory | 2024-05-28T02:55:44Z | 201 | 1 | transformers | [
"transformers",
"gguf",
"text-generation",
"arxiv:2405.14734",
"base_model:princeton-nlp/Mistral-7B-Base-SFT-RDPO",
"base_model:quantized:princeton-nlp/Mistral-7B-Base-SFT-RDPO",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-05-27T14:31:05Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model: princeton-nlp/Mistral-7B-Base-SFT-RDPO
---
# QuantFactory/Mistral-7B-Base-SFT-RDPO-GGUF
This is a quantized version of [princeton-nlp/Mistral-7B-Base-SFT-RDPO](https://huggingface.co/princeton-nlp/Mistral-7B-Base-SFT-RDPO) created using llama.cpp.
# Model Description
This is a model released with the preprint *[SimPO: Simple Preference Optimization with a Reference-Free Reward](https://arxiv.org/abs/2405.14734)*. Please refer to our [repository](https://github.com/princeton-nlp/SimPO) for more details.
|
QuantFactory/Mistral-7B-Base-SFT-SimPO-GGUF | QuantFactory | 2024-05-28T02:54:53Z | 84 | 1 | transformers | [
"transformers",
"gguf",
"text-generation",
"arxiv:2405.14734",
"base_model:princeton-nlp/Mistral-7B-Base-SFT-SimPO",
"base_model:quantized:princeton-nlp/Mistral-7B-Base-SFT-SimPO",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-05-27T15:55:39Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model: princeton-nlp/Mistral-7B-Base-SFT-SimPO
---
# QuantFactory/Mistral-7B-Base-SFT-SimPO-GGUF
This is a quantized version of [princeton-nlp/Mistral-7B-Base-SFT-SimPO](https://huggingface.co/princeton-nlp/Mistral-7B-Base-SFT-SimPO) created using llama.cpp.
# Model Description
This is a model released with the preprint *[SimPO: Simple Preference Optimization with a Reference-Free Reward](https://arxiv.org/abs/2405.14734)*. Please refer to our [repository](https://github.com/princeton-nlp/SimPO) for more details.
|
MangoFish/Chat | MangoFish | 2024-05-28T02:53:45Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-05-28T02:53:44Z | ---
license: apache-2.0
---
|
namespace-Pt/ultragist-mistral-7b-inst | namespace-Pt | 2024-05-28T02:52:38Z | 429 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"custom_code",
"arxiv:2405.16635",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-05-27T07:44:58Z | ---
license: mit
pipeline_tag: text-generation
---
<div align="center">
<h1>UltraGist for Mistral-7B-Instruct-v0.2</h1>
[<a href="https://arxiv.org/abs/2405.16635">Paper</a>] [<a href="https://github.com/namespace-Pt/UltraGist">Github</a>]
</div>
UltraGist is a context compression method that can handle various context lengths and compression ratios **flexibly**, **effectively**, and **efficiently**. We apply UltraGist to [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2).
## Usage
```python
import json
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "namespace-Pt/ultragist-mistral-7b-inst"
tokenizer = AutoTokenizer.from_pretrained(
model_id,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_id,
trust_remote_code=True,
torch_dtype=torch.bfloat16,
attn_implementation="sdpa",
# load the entire model on the default gpu
device_map={"": "cuda"},
# you can manually set the compression ratio, otherwise the model will automatically choose the most suitable compression ratio from [2,4,8,16,32]
# ultragist_ratio=[8],
).eval()
with torch.no_grad():
# long context
with open("data/nqa.json", encoding="utf-8") as f:
example = json.load(f)
content = f"Read this article:\n\n{example['context']}\n\nNow, answer the question based on the above context.\nQuestion:\n{example['input']}"
messages = [{"role": "user", "content": content}]
inputs = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt", return_dict=True).to("cuda")
# reset memory before new compression task
model.memory.reset()
# directly call generate to progressively compress the context while generating next tokens
outputs = model.generate(**inputs, do_sample=False, top_p=1, temperature=1, max_new_tokens=40)[:, inputs["input_ids"].shape[1]:]
print("*"*20)
print(f"Input size: {inputs['input_ids'].shape[1]}")
print(f"Question: {example['input']}")
print(f"Answers: {example['answers']}")
print(f"Prediction: {tokenizer.decode(outputs[0], skip_special_tokens=True)}")
print("*"*20)
# extract the compressed memory (including the generated tokens)
compressed_memory = model.memory.get_memory()
ultragist_size, raw_size, sink_size = model.memory.get_memory_size()
print(f"UltraGist size: {ultragist_size}")
print(f"Raw size: {raw_size}")
print(f"Sink size: {sink_size}")
print(f"Memory: {compressed_memory[0][0].shape}")
print("*"*20)
```
|
namespace-Pt/ultragist-llama2-7b-chat | namespace-Pt | 2024-05-28T02:52:13Z | 46 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"custom_code",
"arxiv:2405.16635",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-05-26T18:08:36Z | ---
license: mit
pipeline_tag: text-generation
---
<div align="center">
<h1>UltraGist for Llama-2-7b-chat</h1>
[<a href="https://arxiv.org/abs/2405.16635">Paper</a>] [<a href="https://github.com/namespace-Pt/UltraGist">Github</a>]
</div>
UltraGist is a context compression method that can handle various context lengths and compression ratios **flexibly**, **effectively**, and **efficiently**. We apply UltraGist to [Llama-2-7b-chat](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf).
## Usage
```python
import json
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "namespace-Pt/ultragist-llama2-7b-chat"
tokenizer = AutoTokenizer.from_pretrained(
model_id,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_id,
trust_remote_code=True,
torch_dtype=torch.bfloat16,
attn_implementation="sdpa",
# load the entire model on the default gpu
device_map={"": "cuda"},
# you can manually set the compression ratio, otherwise the model will automatically choose the most suitable compression ratio from [2,4,8,16,32]
# ultragist_ratio=[8],
).eval()
with torch.no_grad():
# long context
with open("data/nqa.json", encoding="utf-8") as f:
example = json.load(f)
content = f"Read this article:\n\n{example['context']}\n\nNow, answer the question based on the above context.\nQuestion:\n{example['input']}"
messages = [{"role": "user", "content": content}]
inputs = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt", return_dict=True).to("cuda")
# reset memory before new compression task
model.memory.reset()
# directly call generate to progressively compress the context while generating next tokens
outputs = model.generate(**inputs, do_sample=False, top_p=1, temperature=1, max_new_tokens=40)[:, inputs["input_ids"].shape[1]:]
print("*"*20)
print(f"Input size: {inputs['input_ids'].shape[1]}")
print(f"Question: {example['input']}")
print(f"Answers: {example['answers']}")
print(f"Prediction: {tokenizer.decode(outputs[0], skip_special_tokens=True)}")
print("*"*20)
# extract the compressed memory (including the generated tokens)
compressed_memory = model.memory.get_memory()
ultragist_size, raw_size, sink_size = model.memory.get_memory_size()
print(f"UltraGist size: {ultragist_size}")
print(f"Raw size: {raw_size}")
print(f"Sink size: {sink_size}")
print(f"Memory: {compressed_memory[0][0].shape}")
print("*"*20)
```
|
AdamKasumovic/bloomz-7b1-LIMA-OA-en | AdamKasumovic | 2024-05-28T02:49:48Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"bloom",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-05-28T02:48:00Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DownwardSpiral33/gpt2-imdb-pos-roberta16-256_0_5-full-2024.05.28.01.56 | DownwardSpiral33 | 2024-05-28T02:40:13Z | 129 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-05-28T02:39:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Ksgk-fy/ecoach_philippine_v1 | Ksgk-fy | 2024-05-28T02:38:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-28T02:37:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
VatsalPatel18/GATv2Decoder | VatsalPatel18 | 2024-05-28T02:36:52Z | 48 | 0 | transformers | [
"transformers",
"pytorch",
"omics-graph-network",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-28T02:03:54Z | ---
license: bsd-3-clause
---
|
Raneechu/new_combined10_ft | Raneechu | 2024-05-28T02:33:36Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
]
| null | 2024-05-28T02:33:33Z | ---
license: llama2
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Llama-2-7b-hf
model-index:
- name: new_combined10_ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# new_combined10_ft
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.1.1+cu121
- Datasets 2.14.5
- Tokenizers 0.19.1
## Training procedure
### Framework versions
- PEFT 0.6.2
|
LiteLLMs/Llama3-ChatQA-1.5-8B-GGUF | LiteLLMs | 2024-05-28T02:32:57Z | 12 | 0 | null | [
"gguf",
"nvidia",
"chatqa-1.5",
"chatqa",
"llama-3",
"pytorch",
"GGUF",
"text-generation",
"en",
"arxiv:2401.10225",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-05-06T22:22:17Z |
---
language:
- en
license: llama3
tags:
- nvidia
- chatqa-1.5
- chatqa
- llama-3
- pytorch
- GGUF
pipeline_tag: text-generation
quantized_by: andrijdavid
---
# Llama3-ChatQA-1.5-8B-GGUF
- Original model: [Llama3-ChatQA-1.5-8B](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-8B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Llama3-ChatQA-1.5-8B](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-8B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.
* [Ollama](https://github.com/jmorganca/ollama) Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.
* [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents.
<!-- README_GGUF.md-about-gguf end -->
<!-- compatibility_gguf start -->
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
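These per-weight figures can be sanity-checked with a simplified accounting (a rough sketch only; the exact llama.cpp block layouts include slightly different headers for some types):

```python
# Simplified accounting that reproduces the bpw figures quoted above.
# Each super-block covers 256 weights; header sizes are approximations of the
# real llama.cpp block layouts, which differ slightly for some quant types.
def bpw(q_bits, n_sub_blocks, sub_scale_bits, sub_min_bits, super_header_bits):
    total_bits = 256 * q_bits + n_sub_blocks * (sub_scale_bits + sub_min_bits) + super_header_bits
    return total_bits / 256

for name, args in {
    "Q2_K": (2, 16, 4, 4, 16),   # -> 2.5625
    "Q3_K": (3, 16, 6, 0, 16),   # -> 3.4375
    "Q4_K": (4, 8, 6, 6, 32),    # -> 4.5
    "Q5_K": (5, 8, 6, 6, 32),    # -> 5.5
    "Q6_K": (6, 16, 8, 0, 16),   # -> 6.5625
}.items():
    print(f"{name}: {bpw(*args)} bpw")
```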
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: LiteLLMs/Llama3-ChatQA-1.5-8B-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-00001-of-00009.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download LiteLLMs/Llama3-ChatQA-1.5-8B-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download LiteLLMs/Llama3-ChatQA-1.5-8B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install huggingface_hub[hf_transfer]
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/Llama3-ChatQA-1.5-8B-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Q4_0/Q4_0-00001-of-00009.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<PROMPT>", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Llama3-ChatQA-1.5-8B
## Model Details
We introduce Llama3-ChatQA-1.5, which excels at conversational question answering (QA) and retrieval-augmented generation (RAG). Llama3-ChatQA-1.5 is developed using an improved training recipe from [ChatQA paper](https://arxiv.org/pdf/2401.10225), and it is built on top of [Llama-3 base model](https://huggingface.co/meta-llama/Meta-Llama-3-8B). Specifically, we incorporate more conversational QA data to enhance its tabular and arithmetic calculation capability. Llama3-ChatQA-1.5 has two variants: Llama3-ChatQA-1.5-8B and Llama3-ChatQA-1.5-70B. Both models were originally trained using [Megatron-LM](https://github.com/NVIDIA/Megatron-LM), we converted the checkpoints to Hugging Face format. **For more information about ChatQA, check the [website](https://chatqa-project.github.io/)!**
## Other Resources
[Llama3-ChatQA-1.5-70B](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-70B) [Evaluation Data](https://huggingface.co/datasets/nvidia/ChatRAG-Bench) [Training Data](https://huggingface.co/datasets/nvidia/ChatQA-Training-Data) [Retriever](https://huggingface.co/nvidia/dragon-multiturn-query-encoder) [Website](https://chatqa-project.github.io/) [Paper](https://arxiv.org/pdf/2401.10225)
## Benchmark Results
Results in [ChatRAG Bench](https://huggingface.co/datasets/nvidia/ChatRAG-Bench) are as follows:
| | ChatQA-1.0-7B | Command-R-Plus | Llama3-instruct-70b | GPT-4-0613 | GPT-4-Turbo | ChatQA-1.0-70B | ChatQA-1.5-8B | ChatQA-1.5-70B |
| :--- | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
| Doc2Dial | 37.88 | 33.51 | 37.88 | 34.16 | 35.35 | 38.90 | 39.33 | 41.26 |
| QuAC | 29.69 | 34.16 | 36.96 | 40.29 | 40.10 | 41.82 | 39.73 | 38.82 |
| QReCC | 46.97 | 49.77 | 51.34 | 52.01 | 51.46 | 48.05 | 49.03 | 51.40 |
| CoQA | 76.61 | 69.71 | 76.98 | 77.42 | 77.73 | 78.57 | 76.46 | 78.44 |
| DoQA | 41.57 | 40.67 | 41.24 | 43.39 | 41.60 | 51.94 | 49.60 | 50.67 |
| ConvFinQA | 51.61 | 71.21 | 76.6 | 81.28 | 84.16 | 73.69 | 78.46 | 81.88 |
| SQA | 61.87 | 74.07 | 69.61 | 79.21 | 79.98 | 69.14 | 73.28 | 83.82 |
| TopioCQA | 45.45 | 53.77 | 49.72 | 45.09 | 48.32 | 50.98 | 49.96 | 55.63 |
| HybriDial* | 54.51 | 46.7 | 48.59 | 49.81 | 47.86 | 56.44 | 65.76 | 68.27 |
| INSCIT | 30.96 | 35.76 | 36.23 | 36.34 | 33.75 | 31.90 | 30.10 | 32.31 |
| Average (all) | 47.71 | 50.93 | 52.52 | 53.90 | 54.03 | 54.14 | 55.17 | 58.25 |
| Average (exclude HybriDial) | 46.96 | 51.40 | 52.95 | 54.35 | 54.72 | 53.89 | 53.99 | 57.14 |
Note that ChatQA-1.5 is built on the Llama-3 base model, while ChatQA-1.0 is built on the Llama-2 base model. ChatQA-1.5 models use the HybriDial training dataset. To ensure a fair comparison, we also report average scores excluding HybriDial. The data and evaluation scripts for ChatRAG Bench can be found [here](https://huggingface.co/datasets/nvidia/ChatRAG-Bench).
## Prompt Format
**We highly recommend that you use the prompt format we provide, as follows:**
### when context is available
<pre>
System: {System}
{Context}
User: {Question}
Assistant: {Response}
User: {Question}
Assistant:
</pre>
### when context is not available
<pre>
System: {System}
User: {Question}
Assistant: {Response}
User: {Question}
Assistant:
</pre>
**The content of the system's turn (i.e., {System}) for both scenarios is as follows:**
<pre>
This is a chat between a user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions based on the context. The assistant should also indicate when the answer cannot be found in the context.
</pre>
**Note that our ChatQA-1.5 models are optimized for the capability with context, e.g., over documents or retrieved context.**
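For illustration, the no-context variant of this format can be assembled in a few lines of Python (a minimal sketch with an example conversation; the with-context case is covered by the `get_formatted_input` helper in the next section):

```python
# Minimal sketch: build the ChatQA prompt when no retrieved context is used.
system = ("System: This is a chat between a user and an artificial intelligence assistant. "
          "The assistant gives helpful, detailed, and polite answers to the user's questions "
          "based on the context. The assistant should also indicate when the answer cannot "
          "be found in the context.")

messages = [
    {"role": "user", "content": "Who wrote the novel Dune?"},
    {"role": "assistant", "content": "Frank Herbert wrote Dune."},
    {"role": "user", "content": "When was it first published?"},
]

turns = "\n\n".join(
    ("User: " if m["role"] == "user" else "Assistant: ") + m["content"] for m in messages
)
prompt = system + "\n\n" + turns + "\n\nAssistant:"
print(prompt)
```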
## How to use
### take the whole document as context
This applies to the scenario where the whole document fits into the model's context, so there is no need to run retrieval over the document.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "nvidia/Llama3-ChatQA-1.5-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
messages = [
{"role": "user", "content": "what is the percentage change of the net income from Q4 FY23 to Q4 FY24?"}
]
document = """NVIDIA (NASDAQ: NVDA) today reported revenue for the fourth quarter ended January 28, 2024, of $22.1 billion, up 22% from the previous quarter and up 265% from a year ago.\nFor the quarter, GAAP earnings per diluted share was $4.93, up 33% from the previous quarter and up 765% from a year ago. Non-GAAP earnings per diluted share was $5.16, up 28% from the previous quarter and up 486% from a year ago.\nQ4 Fiscal 2024 Summary\nGAAP\n| $ in millions, except earnings per share | Q4 FY24 | Q3 FY24 | Q4 FY23 | Q/Q | Y/Y |\n| Revenue | $22,103 | $18,120 | $6,051 | Up 22% | Up 265% |\n| Gross margin | 76.0% | 74.0% | 63.3% | Up 2.0 pts | Up 12.7 pts |\n| Operating expenses | $3,176 | $2,983 | $2,576 | Up 6% | Up 23% |\n| Operating income | $13,615 | $10,417 | $1,257 | Up 31% | Up 983% |\n| Net income | $12,285 | $9,243 | $1,414 | Up 33% | Up 769% |\n| Diluted earnings per share | $4.93 | $3.71 | $0.57 | Up 33% | Up 765% |"""
def get_formatted_input(messages, context):
system = "System: This is a chat between a user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions based on the context. The assistant should also indicate when the answer cannot be found in the context."
instruction = "Please give a full and complete answer for the question."
for item in messages:
if item['role'] == "user":
## only apply this instruction for the first user turn
item['content'] = instruction + " " + item['content']
break
conversation = '\n\n'.join(["User: " + item["content"] if item["role"] == "user" else "Assistant: " + item["content"] for item in messages]) + "\n\nAssistant:"
formatted_input = system + "\n\n" + context + "\n\n" + conversation
return formatted_input
formatted_input = get_formatted_input(messages, document)
tokenized_prompt = tokenizer(tokenizer.bos_token + formatted_input, return_tensors="pt").to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(input_ids=tokenized_prompt.input_ids, attention_mask=tokenized_prompt.attention_mask, max_new_tokens=128, eos_token_id=terminators)
response = outputs[0][tokenized_prompt.input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
### run retrieval to get top-n chunks as context
This applies when the document is very long and retrieval is therefore necessary. Here, we use our [Dragon-multiturn](https://huggingface.co/nvidia/dragon-multiturn-query-encoder) retriever, which can handle conversational queries. In addition, we provide a few [documents](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-8B/tree/main/docs) for users to play with.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, AutoModel
import torch
import json
## load ChatQA-1.5 tokenizer and model
model_id = "nvidia/Llama3-ChatQA-1.5-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
## load retriever tokenizer and model
retriever_tokenizer = AutoTokenizer.from_pretrained('nvidia/dragon-multiturn-query-encoder')
query_encoder = AutoModel.from_pretrained('nvidia/dragon-multiturn-query-encoder')
context_encoder = AutoModel.from_pretrained('nvidia/dragon-multiturn-context-encoder')
## prepare documents; we use the Land Rover car manual document that we provide as an example
chunk_list = json.load(open("docs.json"))['landrover']
messages = [
{"role": "user", "content": "how to connect the bluetooth in the car?"}
]
### running retrieval
## convert query into a format as follows:
## user: {user}\nagent: {agent}\nuser: {user}
formatted_query_for_retriever = '\n'.join([turn['role'] + ": " + turn['content'] for turn in messages]).strip()
query_input = retriever_tokenizer(formatted_query_for_retriever, return_tensors='pt')
ctx_input = retriever_tokenizer(chunk_list, padding=True, truncation=True, max_length=512, return_tensors='pt')
query_emb = query_encoder(**query_input).last_hidden_state[:, 0, :]
ctx_emb = context_encoder(**ctx_input).last_hidden_state[:, 0, :]
## Compute similarity scores using dot product and rank the similarity
similarities = query_emb.matmul(ctx_emb.transpose(0, 1)) # (1, num_ctx)
ranked_results = torch.argsort(similarities, dim=-1, descending=True) # (1, num_ctx)
## get top-n chunks (n=5)
retrieved_chunks = [chunk_list[idx] for idx in ranked_results.tolist()[0][:5]]
context = "\n\n".join(retrieved_chunks)
### running text generation
formatted_input = get_formatted_input(messages, context)
tokenized_prompt = tokenizer(tokenizer.bos_token + formatted_input, return_tensors="pt").to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(input_ids=tokenized_prompt.input_ids, attention_mask=tokenized_prompt.attention_mask, max_new_tokens=128, eos_token_id=terminators)
response = outputs[0][tokenized_prompt.input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
## Correspondence to
Zihan Liu ([email protected]), Wei Ping ([email protected])
## Citation
<pre>
@article{liu2024chatqa,
title={ChatQA: Surpassing GPT-4 on Conversational QA and RAG},
author={Liu, Zihan and Ping, Wei and Roy, Rajarshi and Xu, Peng and Lee, Chankyu and Shoeybi, Mohammad and Catanzaro, Bryan},
journal={arXiv preprint arXiv:2401.10225},
year={2024}}
</pre>
## License
The use of this model is governed by the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://llama.meta.com/llama3/license/)
<!-- original-model-card end -->
|
x2bee/POLAR-14B-DPO-v1.4 | x2bee | 2024-05-28T02:30:46Z | 798 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ko",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-05-27T07:53:05Z | ---
library_name: transformers
license: apache-2.0
language:
- ko
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details

## Model Description
<!-- Provide a longer summary of what this model is/does. -->
POLAR is a Korean LLM developed by Plateer's AI-lab. It was inspired by Upstage's SOLAR. We will continue to evolve this model and hope to contribute to the Korean LLM ecosystem.
- **Developed by:** AI-Lab of Plateer (Woomun Jung, Eunsoo Ha, MinYoung Joo, Seongjun Son)
- **Model type:** Language model
- **Language(s) (NLP):** ko
- **License:** apache-2.0
- Parent Model: x2bee/POLAR-14B-v0.2
- Trained on 14K DPO data samples generated using wiki data.
## Direct Use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("x2bee/POLAR-14B-DPO-v1.4")
model = AutoModelForCausalLM.from_pretrained("x2bee/POLAR-14B-DPO-v1.4")
```
## Downstream Use [Optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
More information on training data needed
## Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing
More information needed
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
More information needed
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
More information needed
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
More information needed
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
More information needed
## Results
More information needed
# Model Examination
More information needed
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed
# Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
More information needed
**APA:**
More information needed
# Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
More information needed
# More Information [optional]
If you would like more information about our company, please visit the link below.
[tech.x2bee.com](https://tech.x2bee.com/)
# Model Card Authors [optional]
<!-- This section provides another layer of transparency and accountability. Whose views is this model card representing? How many voices were included in its construction? Etc. -->
Woomun Jung, MinYoung Joo, Eunsu Ha, Seungjun Son
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
More information needed
</details> |
nassersala/text-to-color | nassersala | 2024-05-28T02:29:08Z | 3 | 1 | peft | [
"peft",
"safetensors",
"llama",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
]
| null | 2024-05-28T01:23:17Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
model-index:
- name: model-out
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
load_in_8bit: true
load_in_4bit: false
strict: false
datasets:
- path: burkelibbey/colors
type:
field_instruction: description
field_output: color
sequence_len: 4096
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true
adapter: lora
lora_model_dir:
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 4
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 4
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
```
</details><br>
# model-out
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
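As an illustrative sketch only (the prompt wording and generation settings below are assumptions, not part of the official card), the LoRA adapter can be loaded onto the base model with PEFT:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the text-to-color LoRA adapter to the base model.
model = PeftModel.from_pretrained(base, "nassersala/text-to-color")

# Hypothetical prompt: the adapter was trained on (color description -> color) pairs.
prompt = "A deep ocean blue at dusk"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```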
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 4
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.2
- Pytorch 2.1.2+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1 |
quangtqv/bi_encoder_tool_learning_best_model_28_5_2024 | quangtqv | 2024-05-28T02:27:45Z | 7 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-05-28T02:27:10Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# quangtqv/bi_encoder_tool_learning_best_model_28_5_2024
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('quangtqv/bi_encoder_tool_learning_best_model_28_5_2024')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=quangtqv/bi_encoder_tool_learning_best_model_28_5_2024)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
city96/mt5-xl-encoder-fp16 | city96 | 2024-05-28T02:24:13Z | 84 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-28T01:14:21Z | ---
license: apache-2.0
---
This is an fp16 safetensors version of [Google's mT5-xl model](https://huggingface.co/google/mt5-xl) intended for use in downstream inference tasks.
This repository only contains the encoder part of the model. For the full model, use the following repository: [`city96/mt5-xl-fp16`](https://huggingface.co/city96/mt5-xl-fp16)
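A minimal loading sketch with 🤗 Transformers is shown below (an assumption that the checkpoint loads with `MT5EncoderModel`; the tokenizer is taken from the upstream `google/mt5-xl` repository):

```python
import torch
from transformers import AutoTokenizer, MT5EncoderModel

# Tokenizer comes from the upstream mT5-xl repo; weights from this fp16 encoder-only repo.
tokenizer = AutoTokenizer.from_pretrained("google/mt5-xl")
encoder = MT5EncoderModel.from_pretrained("city96/mt5-xl-encoder-fp16", torch_dtype=torch.float16)

inputs = tokenizer("a photo of a cat sitting on a windowsill", return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**inputs).last_hidden_state  # (1, seq_len, d_model) text embeddings
print(hidden.shape)
```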
This model is meant to be used with [HunYuanDiT](https://huggingface.co/Tencent-Hunyuan/HunyuanDiT). |
T3Q-LLM/T3Q-LMM3-MK-v1.0 | T3Q-LLM | 2024-05-28T02:23:09Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-05-28T01:52:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
## Evaluation
hf-causal-experimental (pretrained=T3Q-LLM/T3Q-LMM3-MK-v1.0,use_accelerate=true,trust_remote_code=true), limit: None, provide_description: False, num_fewshot: 0, batch_size: 8
| Task |Version| Metric |Value | |Stderr|
|----------------|------:|--------|-----:|---|-----:|
|kobest_boolq | 0|acc |0.5613|± |0.0132|
| | |macro_f1|0.4616|± |0.0128|
|kobest_copa | 0|acc |0.6710|± |0.0149|
| | |macro_f1|0.6705|± |0.0149|
|kobest_hellaswag| 0|acc |0.4560|± |0.0223|
| | |acc_norm|0.5940|± |0.0220|
| | |macro_f1|0.4543|± |0.0223|
|kobest_sentineg | 0|acc |0.6423|± |0.0241|
| | |macro_f1|0.6108|± |0.0252|
|
zzzyuqing/light-geo-controlnet | zzzyuqing | 2024-05-28T02:19:49Z | 8 | 21 | diffusers | [
"diffusers",
"art",
"controlnet",
"stable-diffusion",
"image-to-image",
"arxiv:2405.17176",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"region:us"
]
| image-to-image | 2024-05-27T11:43:26Z | ---
base_model: stabilityai/stable-diffusion-2-1-base
tags:
- art
- controlnet
- stable-diffusion
- image-to-image
---
# Geometry- and Light-aware ControlNet

The geometry- and light-aware ControlNet uses an object's normal and depth maps as geometry conditions and six predefined materials with a given environment light as lighting conditions. Our model generates images that align with the given geometry and environment light.

## Material Generation
This ControlNet can be used in a distillation process to generate PBR materials. Please refer to the paper "DreamMat: High-quality PBR Material Generation with Geometry- and Light-aware Diffusion Models" for technical details.
<p>
<a href=https://zzzyuqing.github.io/dreammat.github.io/>Project Page</a> •
<a href=https://arxiv.org/abs/2405.17176>Arxiv</a> •
<a href=https://github.com/zzzyuqing/DreamMat>GitHub</a>
</p>


## Training Dataset
We train the geometry- and light-aware ControlNet on images rendered from the objects in the LVIS subset of Objaverse. Since the names and tags of objects in this dataset are rather noisy, we employ BLIP to caption all rendered images. We render 16 random views for every object under randomly chosen environment light maps. The light-condition maps are obtained by ray tracing in Blender and represent the radiance of the predefined materials under the environment light. For normal maps, we transform the model's normal vectors into view space and flip the x-axis following ScanNet's protocol. Depth maps are processed by inverting the real depth values and normalizing them.
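As a rough illustration of the conditioning-map preprocessing described above (a sketch with assumed conventions, not the project's exact code):

```python
import numpy as np

def process_normal(normal_vs):
    """normal_vs: (H, W, 3) view-space normals in [-1, 1]; flip the x-axis following ScanNet."""
    n = normal_vs.copy()
    n[..., 0] *= -1.0
    return (n + 1.0) * 0.5                  # map to [0, 1] for the condition image

def process_depth(depth, mask):
    """depth: (H, W) metric depth values; mask: boolean foreground mask."""
    d = np.zeros_like(depth)
    valid = depth[mask]
    inv = 1.0 / np.clip(valid, 1e-6, None)  # invert the real depth values
    d[mask] = (inv - inv.min()) / (inv.max() - inv.min() + 1e-8)  # normalize to [0, 1]
    return d
```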
## 📖 Citation
```bib
@inproceedings{zhang2024dreammat,
title={DreamMat: High-quality PBR Material Generation with Geometry- and Light-aware Diffusion Models},
author={Zhang, Yuqing and Liu, Yuan and Xie, Zhiyu and Yang, Lei and Liu, Zhongyuan and Yang, Mengzhou and Zhang, Runze and Kou, Qilong and Lin, Cheng and Wang, Wenping and Jin, Xiaogang},
booktitle={SIGGRAPH},
year={2024}
}
``` |
nttwt1597/test_v2_cancer_v3_new | nttwt1597 | 2024-05-28T02:19:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-28T02:18:03Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** nttwt1597
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
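A short loading sketch with Unsloth is shown below (illustrative only; the sequence length, 4-bit setting, and prompt are assumptions, and a CUDA GPU is assumed):

```python
from unsloth import FastLanguageModel

# Load the checkpoint for inference with Unsloth (assumes a CUDA GPU is available).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="nttwt1597/test_v2_cancer_v3_new",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference mode

inputs = tokenizer("Explain what a clinical trial protocol is.", return_tensors="pt").to("cuda")
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```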
|