| modelId (string, length 5–139) | author (string, length 2–42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-06-28 06:27:35) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 500 classes) | tags (sequence, length 1–4.05k) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-06-28 06:24:42) | card (string, length 11–1.01M) |
|---|---|---|---|---|---|---|---|---|---|
leeharok/llama-3-8b-chat-doctor | leeharok | 2024-10-28T08:15:06Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-23T02:41:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
reasonwang/ToolGen-Llama-3-8B | reasonwang | 2024-10-28T08:12:08Z | 36 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"agent",
"toolgen",
"tool",
"tool-learning",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-07T10:17:52Z | ---
library_name: transformers
tags:
- agent
- toolgen
- tool
- tool-learning
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tomhata/ul-wk2-gpt-neo-1.3B | tomhata | 2024-10-28T08:03:31Z | 151 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neo",
"text-generation",
"dataset:orpo-explorers/distilabel-math-preference",
"base_model:EleutherAI/gpt-neo-1.3B",
"base_model:finetune:EleutherAI/gpt-neo-1.3B",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-28T07:34:54Z | ---
library_name: transformers
datasets:
- orpo-explorers/distilabel-math-preference
metrics:
- accuracy
base_model:
- EleutherAI/gpt-neo-1.3B
---
# Model Card for Model ID
## Model Details
GPT-Neo 1.3B model, fine-tuned on math-based prompts using ORPO.
### Model Description
Evaluated on the MMLU elementary mathematics task using the EleutherAI lm-evaluation-harness.
| Tasks |Version|Filter|n-shot|Metric| |Value | |Stderr|
|----------------------|------:|------|-----:|------|---|-----:|---|-----:|
|elementary_mathematics| 1|none | 0|acc |↑ |0.2778|± | 0.023|
This is a slight improvement over the base model, which has an accuracy of 0.2751 ± 0.023. However, the improvement falls within the reported variance, so it is not statistically significant.
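For reference, a comparable run can be launched with the lm-evaluation-harness Python API. This is a hedged sketch rather than the exact command used for the numbers above; in particular, the task identifier (`mmlu_elementary_mathematics` here) varies between harness versions.
```python
# Hedged sketch: re-running the zero-shot MMLU elementary mathematics evaluation.
# The task name and the exact harness version used for this card are assumptions.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=tomhata/ul-wk2-gpt-neo-1.3B",
    tasks=["mmlu_elementary_mathematics"],
    num_fewshot=0,
)
print(results["results"])
```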
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
## Uses
Intended as a starting point for questions about math concepts; actual calculations and numerical values may not be reliable.
### Out-of-Scope Use
Not to be used as a substitute for actually learning math.
## Bias, Risks, and Limitations
This model is unlikely to produce accurate numerical results.
## Training Details
### Training Data
[orpo-explorers/distilabel-math-preference](https://huggingface.co/datasets/orpo-explorers/distilabel-math-preference)
### Training Procedure
1000 entries were randomly sampled. This subsample was further split 90/10 into train/test.
#### Preprocessing [optional]
Entries were split into prompt, chosen, rejected triplets; the maximum length of each of these fields was limited to 2048 characters.
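A minimal preprocessing sketch along these lines is shown below; the seed and the assumption that the dataset already exposes `prompt`/`chosen`/`rejected` columns are mine, not the author's.
```python
# Hedged sketch of the sampling, truncation, and 90/10 split described above.
from datasets import load_dataset

ds = load_dataset("orpo-explorers/distilabel-math-preference", split="train")
ds = ds.shuffle(seed=42).select(range(1000))  # random subsample of 1,000 entries

def truncate(example):
    # Cap each field of the (prompt, chosen, rejected) triplet at 2048 characters.
    return {k: example[k][:2048] for k in ("prompt", "chosen", "rejected")}

splits = ds.map(truncate).train_test_split(test_size=0.1, seed=42)  # 90/10 split
train_ds, eval_ds = splits["train"], splits["test"]
```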
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
|
zwzzz/MentalGLM | zwzzz | 2024-10-28T07:58:30Z | 9 | 1 | null | [
"safetensors",
"chatglm",
"text-generation",
"conversational",
"custom_code",
"zh",
"arxiv:2410.10323",
"base_model:THUDM/glm-4-9b",
"base_model:finetune:THUDM/glm-4-9b",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-10-12T14:14:37Z | ---
license: apache-2.0
language:
- zh
base_model:
- THUDM/glm-4-9b
pipeline_tag: text-generation
---
# MentalGLM is a series of large language models designed for mental health analysis tasks in Chinese.
We have developed the MentalGLM series, the first open-source LLMs designed for explainable mental health analysis targeting Chinese social media, based on GLM-4-9b and GLM-4-9b-chat.
## How to use
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"
tokenizer = AutoTokenizer.from_pretrained("zwzzz/MentalGLM", trust_remote_code=True)

query = "考虑以下这个帖子,帖子体现了什么认知路径?这已经够糟糕的了。不过在那一周我将完全失去我的支持。我没有什么可期待的。"
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": query}],
    add_generation_prompt=True,
    tokenize=True,
    return_tensors="pt",
    return_dict=True,
)
inputs = inputs.to(device)

model = AutoModelForCausalLM.from_pretrained(
    "zwzzz/MentalGLM",
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
).to(device).eval()

gen_kwargs = {"max_length": 1000, "do_sample": True, "top_k": 1}
with torch.no_grad():
    outputs = model.generate(**inputs, **gen_kwargs)
    outputs = outputs[:, inputs["input_ids"].shape[1]:]
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Citation
Article address: [https://arxiv.org/pdf/2410.10323.pdf](https://arxiv.org/pdf/2410.10323.pdf)
```bibtex
@article{zhai2024mentalglm,
title={MentalGLM Series: Explainable Large Language Models for Mental Health Analysis on Chinese Social Media},
author={Zhai, Wei and Bai, Nan and Zhao, Qing and Li, Jianqiang and Wang, Fan and Qi, Hongzhi and Jiang, Meng and Wang, Xiaoqin and Yang, Bing Xiang and Fu, Guanghui},
journal={arXiv preprint arXiv:2410.10323},
year={2024}
}
``` |
zwzzz/MentalGLM-chat | zwzzz | 2024-10-28T07:57:58Z | 6 | 2 | null | [
"safetensors",
"chatglm",
"text-generation",
"conversational",
"custom_code",
"zh",
"arxiv:2410.10323",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-10-12T10:33:36Z | ---
license: apache-2.0
language:
- zh
pipeline_tag: text-generation
---
# MentalGLM is a series of large language models designed for mental health analysis tasks in Chinese.
We have developed the MentalGLM series, the first open-source LLMs designed for explainable mental health analysis targeting Chinese social media, based on GLM-4-9b and GLM-4-9b-chat.
## How to use
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"
tokenizer = AutoTokenizer.from_pretrained("zwzzz/MentalGLM-chat", trust_remote_code=True)

query = "考虑以下这个帖子,帖子体现了什么认知路径?这已经够糟糕的了。不过在那一周我将完全失去我的支持。我没有什么可期待的。"
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": query}],
    add_generation_prompt=True,
    tokenize=True,
    return_tensors="pt",
    return_dict=True,
)
inputs = inputs.to(device)

model = AutoModelForCausalLM.from_pretrained(
    "zwzzz/MentalGLM-chat",
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
).to(device).eval()

gen_kwargs = {"max_length": 1000, "do_sample": True, "top_k": 1}
with torch.no_grad():
    outputs = model.generate(**inputs, **gen_kwargs)
    outputs = outputs[:, inputs["input_ids"].shape[1]:]
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Citation
Article address: [https://arxiv.org/pdf/2410.10323.pdf](https://arxiv.org/pdf/2410.10323.pdf)
```bibtex
@article{zhai2024mentalglm,
title={MentalGLM Series: Explainable Large Language Models for Mental Health Analysis on Chinese Social Media},
author={Zhai, Wei and Bai, Nan and Zhao, Qing and Li, Jianqiang and Wang, Fan and Qi, Hongzhi and Jiang, Meng and Wang, Xiaoqin and Yang, Bing Xiang and Fu, Guanghui},
journal={arXiv preprint arXiv:2410.10323},
year={2024}
}
``` |
glif-loradex-trainer/lemnop_hologram_plus | glif-loradex-trainer | 2024-10-28T07:54:44Z | 168 | 2 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] | text-to-image | 2024-10-28T07:54:09Z | ---
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: samples/1730101911495__000003000_0.jpg
text: hgr4m+, human
- output:
url: samples/1730101936125__000003000_1.jpg
text: hgr4m+, skull
- output:
url: samples/1730101960763__000003000_2.jpg
text: hgr4m+, flower
- output:
url: samples/1730101985402__000003000_3.jpg
text: hgr4m+, cat
- output:
url: samples/1730102010029__000003000_4.jpg
text: hgr4m+, globe
- output:
url: samples/1730102034666__000003000_5.jpg
text: hgr4m+, dice
base_model: black-forest-labs/FLUX.1-dev
trigger: hgr4m+
instance_prompt: hgr4m+
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# hologram_plus
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `lemnop`.
<Gallery />
## Trigger words
You should use `hgr4m+` to trigger the image generation.
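A hedged diffusers sketch for using the LoRA on top of FLUX.1-dev; the sampler settings are assumptions, and `load_lora_weights` may need an explicit `weight_name` depending on how the files are laid out in this repo.
```python
import torch
from diffusers import FluxPipeline

# Load the base model and attach the LoRA from this repository.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("glif-loradex-trainer/lemnop_hologram_plus")

# Put the trigger word at the start of the prompt.
image = pipe("hgr4m+, cat", num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("hologram_cat.png")
```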
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/lemnop_hologram_plus/tree/main) them in the Files & versions tab.
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
Yutaka77/bert-base-japanese-v3-wrime-sentiment-yutaka2 | Yutaka77 | 2024-10-28T07:49:50Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-28T07:49:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
abdulmannan-01/Llama-3.1-8b-finetuned-for-sql-generation-bnb-4bit | abdulmannan-01 | 2024-10-28T07:48:41Z | 78 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-10-27T16:12:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
### Model Description
This is the model card of a 🤗 transformers model that has been pushed on the Hub.
- **Developed by:** Abdul Mannan
- **Finetuned from model:** meta-llama/Llama-3.1-8B-Instruct
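A hedged loading sketch based on the repository tags (4-bit bitsandbytes quantization, text generation); the example prompt and generation settings are assumptions, and `bitsandbytes` must be installed.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abdulmannan-01/Llama-3.1-8b-finetuned-for-sql-generation-bnb-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Hypothetical SQL-generation prompt; the expected prompt format is not documented.
prompt = "Write a SQL query that lists all customers who placed an order in 2023."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```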
|
kiwi1229/cross_encoder_sample | kiwi1229 | 2024-10-28T07:42:05Z | 107 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"cross-encoder",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-28T07:41:47Z | ---
library_name: transformers
tags:
- cross-encoder
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DBMe/Midnight-Miqu-103B-v1.0-3.53bpw-h6-exl2 | DBMe | 2024-10-28T07:41:06Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2024-10-26T10:40:43Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
license: other
---
Quantized model => https://huggingface.co/sophosympatheia/Midnight-Miqu-103B-v1.0
**Quantization Details:**
Quantization is done using turboderp's ExLlamaV2 v0.2.3.
I use the default calibration datasets and arguments. The repo also includes a "measurement.json" file, which was used during the quantization process.
For models with bits per weight (BPW) over 6.0, I default to quantizing the `lm_head` layer at 8 bits instead of the standard 6 bits.
---
**Who are you? What's with these weird BPWs on [insert model here]?**
I specialize in optimized EXL2 quantization for models in the 70B to 100B+ range, specifically tailored for 48GB VRAM setups. My rig is built using 2 x 3090s with a Ryzen APU (APU used solely for desktop output—no VRAM wasted on the 3090s). I use TabbyAPI for inference, targeting context sizes between 32K and 64K.
Every model I upload includes a `config.yml` file with my ideal TabbyAPI settings. If you're using my config, don’t forget to set `PYTORCH_CUDA_ALLOC_CONF=backend:cudaMallocAsync` to save some VRAM.
|
TenzinGayche/example.v2 | TenzinGayche | 2024-10-28T07:40:47Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-28T07:33:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BAAI/CCI3-HQ-Classifier | BAAI | 2024-10-28T07:25:41Z | 149 | 9 | null | [
"safetensors",
"xlm-roberta",
"en",
"zh",
"base_model:BAAI/bge-m3",
"base_model:finetune:BAAI/bge-m3",
"license:apache-2.0",
"region:us"
] | null | 2024-08-30T03:14:53Z | ---
language:
- en
- zh
license: apache-2.0
base_model:
- BAAI/bge-m3
---
# CCI3-HQ-Classifier
## Model summary
This is a classifier for judging the educational value of web pages. It was developed to filter and curate educational content from web datasets and was trained on 145k annotations generated by [Qwen2-72B-instruct](https://huggingface.co/Qwen/Qwen2-72B-Instruct) for web samples from the [CCI3](https://huggingface.co/datasets/BAAI/CCI3-Data) dataset.
We used this classifier to build [CCI3-HQ](https://huggingface.co/datasets/BAAI/CCI3-HQ) dataset.
### How to use in transformers
To load the classifier, use the following code:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("BAAI/cci3-hq-classifier")
model = AutoModelForSequenceClassification.from_pretrained("BAAI/cci3-hq-classifier")
text = "曾巩:为人廉洁奉公,才华横溢,关心民间疾苦曾巩,字子固,是我国北宋时期著名的文学家,政治家和教育家。他的一生政绩颇丰,为百姓们做出了许多的好事,在文学创作上他又是北宋诗文革新的主要人物。他文章写得耐人寻味,表露了自己的真情实感。被后人称之为 唐宋八大家之一 。"
inputs = tokenizer(text, return_tensors="pt", padding="longest", truncation=True)
outputs = model(**inputs)
logits = outputs.logits.squeeze(-1).float().detach().numpy()
score = logits.item()
result = {
"text": text,
"score": score
}
print(result)
```
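Continuing from the snippet above, one hedged way to turn the raw regression score into the integer bucket used for curation (the exact rounding convention is an assumption):
```python
# Clip to [0, 5], round to the nearest integer, and apply the recommended
# curation threshold of int_score >= 3 (see Limitations below).
int_score = int(round(max(0.0, min(score, 5.0))))
keep_for_curation = int_score >= 3
print({"int_score": int_score, "keep_for_curation": keep_for_curation})
```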
## Training
The classifier was trained on 145,000 pairs of web samples and their scores from 0 to 5, generated by Qwen2. The samples were annotated based on their educational quality with 0 being not educational and 5 being highly educational.
The prompt used for annotation mostly reuses [FineWeb-edu prompt](https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/fjZQ4izIj1rx1xQnBTKKr.png).
We added a classification head with a single regression output to [BGE-M3](https://huggingface.co/BAAI/bge-m3) and trained the model for 20 epochs with a learning rate of 3e-4. During training, the embedding and encoder layers were frozen to focus on the classification head and dropout was not used. The model achieved an F1 score of 73% when converted to a binary classifier using a score threshold of 3.
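A minimal sketch of that setup is shown below; it assumes the single-output regression head provided by `AutoModelForSequenceClassification` with `num_labels=1`, which may differ from the released training script (linked at the end of this card).
```python
from transformers import AutoModelForSequenceClassification

# BGE-M3 backbone with a single regression output.
model = AutoModelForSequenceClassification.from_pretrained(
    "BAAI/bge-m3", num_labels=1, problem_type="regression"
)

# Freeze embeddings and encoder so only the classification head is trained.
for param in model.base_model.parameters():
    param.requires_grad = False
```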
**Training Details:**
- Model: BGE-M3 with a classification head
- Dataset: 145,000 samples from Qwen2 annotations
- Epochs: 20
- Learning Rate: 3e-4
- Evaluation Metric: F1 score
**Classification report**
We treat the regression model's predictions as discrete classes to calculate the metrics on a hold-out set of 1500 Qwen2-annotated samples.
```
precision recall f1-score support
0 0.76 0.58 0.66 3890
1 0.55 0.62 0.58 4896
2 0.40 0.51 0.45 2703
3 0.38 0.42 0.40 1536
4 0.59 0.27 0.37 972
5 0.33 0.06 0.10 83
accuracy 0.54 14080
macro avg 0.50 0.41 0.43 14080
weighted avg 0.56 0.54 0.54 14080
```
**Confusion matrix**
We verify that the predicted educational scores are indeed close to their ground truth and are mostly affected by annotation noise.
```
2244 1514 126 6 0 0
690 3035 1049 117 5 0
y_true 24 878 1383 398 20 0
0 118 651 643 124 0
1 13 202 482 264 10
0 0 6 39 33 5
y_pred
```
## Limitations
While the CCI3-HQ classifier performs well in distinguishing high-quality educational content for CCI3 dataset, there are some limitations:
- **Scope**: The model's performance may vary across different datasets, particularly when applied to out-of-distribution samples. It is specifically designed to handle educational content related to primary and grade school levels and may exhibit lower performance on content intended for higher education or specialized domains.
- **Bias**: The model's performance relies on the quality and representativeness of both the training data and the LLM used for annotation. Biases in either can influence the classifier's decisions. There is a risk of overfitting to content that appears more academic, leading to higher scores. We recommend using an `int_score >= 3` as a threshold for data curation.
- **Context**: The classifier operates by evaluating individual web pages or extracts without considering the broader context, which may limit its effectiveness in certain scenarios.
The training and inference code is available on GitHub: https://github.com/FlagAI-Open/FlagAI/tree/master/examples/CCI3-HQ
## References
1. https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu
2. https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier |
GoldenLlama/krx_qwen2.5_7b_it_v8 | GoldenLlama | 2024-10-28T07:23:55Z | 5 | 0 | null | [
"safetensors",
"qwen2",
"krx",
"unsloth",
"trl",
"sft",
"text-generation",
"conversational",
"ko",
"en",
"dataset:amphora/krx-sample-instructions",
"base_model:unsloth/Qwen2.5-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-10-28T06:40:59Z | ---
base_model:
- unsloth/Qwen2.5-7B-Instruct
datasets:
- amphora/krx-sample-instructions
language:
- ko
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- krx
- unsloth
- trl
- sft
---
<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/made with unsloth.png" height="100" width="200" align="center" /> |
liusincere/Esha | liusincere | 2024-10-28T07:20:40Z | 73 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-10-24T11:42:52Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
base_model:
- black-forest-labs/FLUX.1-dev
pipeline_tag: text-to-image
tags:
- flux
- diffusers
- lora
--- |
Aarushhh/untrained-yuno-225M | Aarushhh | 2024-10-28T07:09:21Z | 137 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-28T07:08:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MatanP/m4t-model-FineTuned | MatanP | 2024-10-28T07:08:09Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"seamless_m4t",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-10-28T06:51:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
devagonal/flan-t5-rouge-durga-q5-clean-4c | devagonal | 2024-10-28T07:02:15Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-10-28T07:01:32Z | ---
library_name: transformers
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: flan-t5-rouge-durga-q5-clean-4c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-rouge-durga-q5-clean-4c
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2052
- Rouge1: 0.4357
- Rouge2: 0.2982
- Rougel: 0.4326
- Rougelsum: 0.4327
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 30
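The hyperparameters above map roughly onto the following `Seq2SeqTrainingArguments` sketch; the output directory is a placeholder and all unlisted settings are left at their defaults.
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-rouge-durga-q5-clean-4c",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    optim="adamw_torch",          # betas=(0.9, 0.999), epsilon=1e-08 are the defaults
    lr_scheduler_type="linear",
    num_train_epochs=30,
)
```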
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 2.438 | 1.0 | 9 | 1.9807 | 0.2583 | 0.0732 | 0.2536 | 0.2534 |
| 2.6463 | 2.0 | 18 | 1.7018 | 0.2588 | 0.0705 | 0.2521 | 0.2525 |
| 1.7877 | 3.0 | 27 | 1.4796 | 0.2840 | 0.0824 | 0.2764 | 0.2765 |
| 2.0417 | 4.0 | 36 | 1.3040 | 0.3100 | 0.1072 | 0.3028 | 0.3027 |
| 2.0216 | 5.0 | 45 | 1.1630 | 0.3322 | 0.1262 | 0.3255 | 0.3246 |
| 1.7093 | 6.0 | 54 | 1.0289 | 0.3359 | 0.1283 | 0.3283 | 0.3285 |
| 1.6109 | 7.0 | 63 | 0.9288 | 0.3728 | 0.1752 | 0.3631 | 0.3628 |
| 1.3041 | 8.0 | 72 | 0.8358 | 0.3691 | 0.1709 | 0.3592 | 0.3593 |
| 1.3242 | 9.0 | 81 | 0.7609 | 0.3666 | 0.1744 | 0.3573 | 0.3579 |
| 1.0971 | 10.0 | 90 | 0.6803 | 0.3724 | 0.1809 | 0.3659 | 0.3663 |
| 0.7156 | 11.0 | 99 | 0.6153 | 0.3742 | 0.1833 | 0.3634 | 0.3637 |
| 0.8419 | 12.0 | 108 | 0.5537 | 0.3748 | 0.1870 | 0.3645 | 0.3655 |
| 0.8853 | 13.0 | 117 | 0.5012 | 0.3775 | 0.1986 | 0.3681 | 0.3687 |
| 1.0922 | 14.0 | 126 | 0.4396 | 0.3738 | 0.1960 | 0.3629 | 0.3634 |
| 0.8752 | 15.0 | 135 | 0.4022 | 0.3844 | 0.2097 | 0.3755 | 0.3762 |
| 0.8189 | 16.0 | 144 | 0.3810 | 0.4050 | 0.2350 | 0.3970 | 0.3975 |
| 0.639 | 17.0 | 153 | 0.3503 | 0.4039 | 0.2341 | 0.3977 | 0.3976 |
| 0.7971 | 18.0 | 162 | 0.3162 | 0.4082 | 0.2428 | 0.4022 | 0.4028 |
| 0.7211 | 19.0 | 171 | 0.3069 | 0.4174 | 0.2504 | 0.4131 | 0.4128 |
| 0.7633 | 20.0 | 180 | 0.2804 | 0.4204 | 0.2562 | 0.4154 | 0.4167 |
| 0.6475 | 21.0 | 189 | 0.2685 | 0.4308 | 0.2750 | 0.4269 | 0.4274 |
| 0.5642 | 22.0 | 198 | 0.2498 | 0.4232 | 0.2700 | 0.4175 | 0.4184 |
| 0.66 | 23.0 | 207 | 0.2377 | 0.4311 | 0.2832 | 0.4246 | 0.4249 |
| 0.6004 | 24.0 | 216 | 0.2335 | 0.4298 | 0.2868 | 0.4255 | 0.4257 |
| 0.6263 | 25.0 | 225 | 0.2216 | 0.4252 | 0.2806 | 0.4211 | 0.4212 |
| 0.4931 | 26.0 | 234 | 0.2146 | 0.4274 | 0.2858 | 0.4232 | 0.4236 |
| 0.5072 | 27.0 | 243 | 0.2091 | 0.4309 | 0.2862 | 0.4266 | 0.4267 |
| 0.5079 | 28.0 | 252 | 0.2069 | 0.4354 | 0.2969 | 0.4315 | 0.4324 |
| 0.494 | 29.0 | 261 | 0.2058 | 0.4326 | 0.2965 | 0.4290 | 0.4299 |
| 0.6008 | 30.0 | 270 | 0.2052 | 0.4357 | 0.2982 | 0.4326 | 0.4327 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.5.0+cu121
- Datasets 3.0.2
- Tokenizers 0.20.1
|
Scuccorese/Florence-2-FT-food-ingredients-v2 | Scuccorese | 2024-10-28T07:00:55Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"florence2",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-10-28T06:57:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
microsoft/xclip-large-patch14 | microsoft | 2024-10-28T06:59:35Z | 82,542 | 11 | transformers | [
"transformers",
"pytorch",
"safetensors",
"xclip",
"feature-extraction",
"vision",
"video-classification",
"en",
"arxiv:2208.02816",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | video-classification | 2022-09-07T15:17:38Z | ---
language: en
license: mit
tags:
- vision
- video-classification
model-index:
- name: nielsr/xclip-large-patch14
results:
- task:
type: video-classification
dataset:
name: Kinetics 400
type: kinetics-400
metrics:
- type: top-1 accuracy
value: 87.1
- type: top-5 accuracy
value: 97.6
---
# X-CLIP (large-sized model)
X-CLIP model (large-sized, patch resolution of 14) trained fully-supervised on [Kinetics-400](https://www.deepmind.com/open-source/kinetics). It was introduced in the paper [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816) by Ni et al. and first released in [this repository](https://github.com/microsoft/VideoX/tree/master/X-CLIP).
This model was trained using 8 frames per video, at a resolution of 224x224.
Disclaimer: The team releasing X-CLIP did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
X-CLIP is a minimal extension of [CLIP](https://huggingface.co/docs/transformers/model_doc/clip) for general video-language understanding. The model is trained in a contrastive way on (video, text) pairs.

This allows the model to be used for tasks like zero-shot, few-shot or fully supervised video classification and video-text retrieval.
## Intended uses & limitations
You can use the raw model for determining how well text goes with a given video. See the [model hub](https://huggingface.co/models?search=microsoft/xclip) to look for
fine-tuned versions on a task that interests you.
### How to use
For code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/xclip.html#).
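As a minimal zero-shot sketch (not taken from the official docs; the random frames and candidate labels below are placeholders):

```python
# Sketch: zero-shot video classification with the transformers X-CLIP classes.
# The 8 frames here are random noise; in practice, sample 8 RGB frames from a real clip.
import numpy as np
import torch
from transformers import XCLIPProcessor, XCLIPModel

model_name = "microsoft/xclip-large-patch14"
processor = XCLIPProcessor.from_pretrained(model_name)
model = XCLIPModel.from_pretrained(model_name)

video = list(np.random.randint(0, 256, size=(8, 224, 224, 3), dtype=np.uint8))  # 8 frames, 224x224
texts = ["playing guitar", "riding a bike", "cooking"]

inputs = processor(text=texts, videos=video, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

probs = outputs.logits_per_video.softmax(dim=-1)
print(dict(zip(texts, probs[0].tolist())))
```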
## Training data
This model was trained on [Kinetics-400](https://www.deepmind.com/open-source/kinetics).
### Preprocessing
The exact details of preprocessing during training can be found [here](https://github.com/microsoft/VideoX/blob/40f6d177e0a057a50ac69ac1de6b5938fd268601/X-CLIP/datasets/build.py#L247).
The exact details of preprocessing during validation can be found [here](https://github.com/microsoft/VideoX/blob/40f6d177e0a057a50ac69ac1de6b5938fd268601/X-CLIP/datasets/build.py#L285).
During validation, the shorter edge of each frame is resized, after which a center crop to a fixed resolution (e.g. 224x224) is taken. Next, frames are normalized across the RGB channels with the ImageNet mean and standard deviation.
## Evaluation results
This model achieves a top-1 accuracy of 87.1% and a top-5 accuracy of 97.6%.
|
DianLiI/genbio-model | DianLiI | 2024-10-28T06:55:17Z | 47 | 0 | transformers | [
"transformers",
"safetensors",
"genbio",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"region:us"
] | feature-extraction | 2024-10-28T06:45:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
HyeongSoo/qwen-0.5b-sft-full | HyeongSoo | 2024-10-28T06:54:48Z | 15 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:HuggingFaceH4/ultrachat_200k",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-26T14:32:42Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-0.5B-Instruct
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrachat_200k
model-index:
- name: qwen-0.5b-sft-full
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen-0.5b-sft-full
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the HuggingFaceH4/ultrachat_200k dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4074
## Model description
More information needed
## Intended uses & limitations
More information needed
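No inference example is given; below is a minimal chat sketch, assuming the standard Qwen2.5 chat template shipped with the tokenizer applies (the question is a placeholder):

```python
# Minimal sketch: chat with the SFT checkpoint via the tokenizer's chat template.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "HyeongSoo/qwen-0.5b-sft-full"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Give me three tips for better sleep."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```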
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2863 | 1.0 | 30584 | 1.4074 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
theprint/Boptruth-NeuralMonarch-7B | theprint | 2024-10-28T06:51:34Z | 75 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gguf",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"nbeerbower/bophades-mistral-truthy-DPO-7B",
"mlabonne/NeuralMonarch-7B",
"unsloth",
"base_model:mlabonne/NeuralMonarch-7B",
"base_model:merge:mlabonne/NeuralMonarch-7B",
"base_model:nbeerbower/bophades-mistral-truthy-DPO-7B",
"base_model:merge:nbeerbower/bophades-mistral-truthy-DPO-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-09T16:13:47Z | ---
tags:
- merge
- mergekit
- lazymergekit
- nbeerbower/bophades-mistral-truthy-DPO-7B
- mlabonne/NeuralMonarch-7B
- unsloth
base_model:
- nbeerbower/bophades-mistral-truthy-DPO-7B
- mlabonne/NeuralMonarch-7B
license: apache-2.0
pipeline_tag: text-generation
---
# Boptruth-NeuralMonarch-7B
Boptruth-NeuralMonarch-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [nbeerbower/bophades-mistral-truthy-DPO-7B](https://huggingface.co/nbeerbower/bophades-mistral-truthy-DPO-7B)
* [mlabonne/NeuralMonarch-7B](https://huggingface.co/mlabonne/NeuralMonarch-7B)
## 🚨 Use the Alpaca prompt format
If you use standard ChatML, you may end up with stray `<|im_end|>` tokens at the end of responses.
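As a rough illustration (this uses the common Alpaca template wording, not necessarily the exact string used during training), an Alpaca-style prompt can be assembled like this:

```python
# Minimal Alpaca-style prompt builder; adjust the instruction text to your use case.
def alpaca_prompt(instruction: str) -> str:
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

print(alpaca_prompt("What is a large language model?"))
```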
## 👀 Looking for GGUF?
Find quantized versions of this model [right here](https://huggingface.co/mradermacher/Boptruth-NeuralMonarch-7B-GGUF).
## 🧩 Configuration
```yaml
slices:
- sources:
- model: nbeerbower/bophades-mistral-truthy-DPO-7B
layer_range: [0, 32]
- model: mlabonne/NeuralMonarch-7B
layer_range: [0, 32]
merge_method: slerp
base_model: nbeerbower/bophades-mistral-truthy-DPO-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "theprint/Boptruth-NeuralMonarch-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
devagonal/flan-t5-rouge-durga-q5-clean-4b | devagonal | 2024-10-28T06:50:10Z | 108 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-10-28T06:49:22Z | ---
library_name: transformers
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: flan-t5-rouge-durga-q5-clean-4b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-rouge-durga-q5-clean-4b
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0021
- Rouge1: 0.7378
- Rouge2: 0.7126
- Rougel: 0.7379
- Rougelsum: 0.7390
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 2.0584 | 1.0 | 9 | 1.6093 | 0.2822 | 0.0866 | 0.2756 | 0.2752 |
| 1.9958 | 2.0 | 18 | 1.1569 | 0.3261 | 0.1042 | 0.3177 | 0.3186 |
| 1.174 | 3.0 | 27 | 0.8836 | 0.3770 | 0.1669 | 0.3656 | 0.3660 |
| 1.1673 | 4.0 | 36 | 0.6420 | 0.3646 | 0.1590 | 0.3569 | 0.3580 |
| 1.0302 | 5.0 | 45 | 0.4727 | 0.3987 | 0.2234 | 0.3940 | 0.3943 |
| 0.6135 | 6.0 | 54 | 0.3187 | 0.4167 | 0.2439 | 0.4102 | 0.4102 |
| 0.5838 | 7.0 | 63 | 0.2294 | 0.4542 | 0.3007 | 0.4478 | 0.4462 |
| 0.4479 | 8.0 | 72 | 0.1891 | 0.4618 | 0.3175 | 0.4579 | 0.4569 |
| 0.3936 | 9.0 | 81 | 0.1373 | 0.4664 | 0.3152 | 0.4624 | 0.4606 |
| 0.3307 | 10.0 | 90 | 0.1073 | 0.5085 | 0.3889 | 0.5069 | 0.5064 |
| 0.3624 | 11.0 | 99 | 0.0845 | 0.5074 | 0.3887 | 0.5061 | 0.5055 |
| 0.1817 | 12.0 | 108 | 0.0702 | 0.5456 | 0.4416 | 0.5444 | 0.5436 |
| 0.2335 | 13.0 | 117 | 0.0705 | 0.5132 | 0.4077 | 0.5136 | 0.5125 |
| 0.1604 | 14.0 | 126 | 0.0650 | 0.5486 | 0.4418 | 0.5464 | 0.5455 |
| 0.1306 | 15.0 | 135 | 0.0540 | 0.5469 | 0.4508 | 0.5468 | 0.5467 |
| 0.1194 | 16.0 | 144 | 0.0489 | 0.5935 | 0.5103 | 0.5925 | 0.5931 |
| 0.2133 | 17.0 | 153 | 0.0441 | 0.5746 | 0.4862 | 0.5732 | 0.5736 |
| 0.1035 | 18.0 | 162 | 0.0425 | 0.5799 | 0.4981 | 0.5786 | 0.5798 |
| 0.1049 | 19.0 | 171 | 0.0333 | 0.6341 | 0.5608 | 0.6325 | 0.6325 |
| 0.1165 | 20.0 | 180 | 0.0287 | 0.6398 | 0.5755 | 0.6390 | 0.6379 |
| 0.1197 | 21.0 | 189 | 0.0300 | 0.5988 | 0.5223 | 0.5995 | 0.5996 |
| 0.0607 | 22.0 | 198 | 0.0245 | 0.6465 | 0.5810 | 0.6458 | 0.6453 |
| 0.1443 | 23.0 | 207 | 0.0238 | 0.6454 | 0.5820 | 0.6475 | 0.6470 |
| 0.0727 | 24.0 | 216 | 0.0188 | 0.6769 | 0.6239 | 0.6764 | 0.6770 |
| 0.0462 | 25.0 | 225 | 0.0177 | 0.6926 | 0.6368 | 0.6918 | 0.6923 |
| 0.0804 | 26.0 | 234 | 0.0132 | 0.6979 | 0.6512 | 0.6975 | 0.6988 |
| 0.0337 | 27.0 | 243 | 0.0135 | 0.6971 | 0.6450 | 0.6970 | 0.6977 |
| 0.0459 | 28.0 | 252 | 0.0131 | 0.7019 | 0.6564 | 0.7019 | 0.7029 |
| 0.0233 | 29.0 | 261 | 0.0102 | 0.7089 | 0.6671 | 0.7096 | 0.7096 |
| 0.0228 | 30.0 | 270 | 0.0112 | 0.7057 | 0.6645 | 0.7055 | 0.7063 |
| 0.0435 | 31.0 | 279 | 0.0080 | 0.7125 | 0.6717 | 0.7117 | 0.7130 |
| 0.0364 | 32.0 | 288 | 0.0114 | 0.7108 | 0.6653 | 0.7102 | 0.7098 |
| 0.0112 | 33.0 | 297 | 0.0086 | 0.7184 | 0.6786 | 0.7182 | 0.7192 |
| 0.0325 | 34.0 | 306 | 0.0068 | 0.7268 | 0.6917 | 0.7267 | 0.7274 |
| 0.0173 | 35.0 | 315 | 0.0052 | 0.7327 | 0.7016 | 0.7317 | 0.7330 |
| 0.0599 | 36.0 | 324 | 0.0058 | 0.7291 | 0.6969 | 0.7297 | 0.7293 |
| 0.0125 | 37.0 | 333 | 0.0044 | 0.7336 | 0.7057 | 0.7338 | 0.7347 |
| 0.0155 | 38.0 | 342 | 0.0054 | 0.7238 | 0.6865 | 0.7241 | 0.7246 |
| 0.0199 | 39.0 | 351 | 0.0050 | 0.7293 | 0.6970 | 0.7294 | 0.7295 |
| 0.0109 | 40.0 | 360 | 0.0035 | 0.7348 | 0.7077 | 0.7352 | 0.7355 |
| 0.0229 | 41.0 | 369 | 0.0034 | 0.7348 | 0.7077 | 0.7352 | 0.7355 |
| 0.0353 | 42.0 | 378 | 0.0033 | 0.7348 | 0.7077 | 0.7352 | 0.7355 |
| 0.0124 | 43.0 | 387 | 0.0035 | 0.7357 | 0.7080 | 0.7359 | 0.7364 |
| 0.0147 | 44.0 | 396 | 0.0033 | 0.7330 | 0.7032 | 0.7333 | 0.7331 |
| 0.0055 | 45.0 | 405 | 0.0032 | 0.7322 | 0.7023 | 0.7324 | 0.7325 |
| 0.0183 | 46.0 | 414 | 0.0031 | 0.7378 | 0.7126 | 0.7379 | 0.7390 |
| 0.004 | 47.0 | 423 | 0.0033 | 0.7350 | 0.7069 | 0.7353 | 0.7365 |
| 0.0195 | 48.0 | 432 | 0.0032 | 0.7331 | 0.7019 | 0.7323 | 0.7333 |
| 0.0112 | 49.0 | 441 | 0.0031 | 0.7378 | 0.7126 | 0.7379 | 0.7390 |
| 0.0186 | 50.0 | 450 | 0.0029 | 0.7378 | 0.7126 | 0.7379 | 0.7390 |
| 0.0043 | 51.0 | 459 | 0.0028 | 0.7378 | 0.7126 | 0.7379 | 0.7390 |
| 0.011 | 52.0 | 468 | 0.0023 | 0.7378 | 0.7126 | 0.7379 | 0.7390 |
| 0.0203 | 53.0 | 477 | 0.0021 | 0.7378 | 0.7126 | 0.7379 | 0.7390 |
| 0.0099 | 54.0 | 486 | 0.0021 | 0.7377 | 0.7128 | 0.7376 | 0.7391 |
| 0.0095 | 55.0 | 495 | 0.0021 | 0.7378 | 0.7126 | 0.7379 | 0.7390 |
| 0.021 | 56.0 | 504 | 0.0021 | 0.7378 | 0.7126 | 0.7379 | 0.7390 |
| 0.0191 | 57.0 | 513 | 0.0022 | 0.7378 | 0.7126 | 0.7379 | 0.7390 |
| 0.0033 | 58.0 | 522 | 0.0021 | 0.7378 | 0.7126 | 0.7379 | 0.7390 |
| 0.0264 | 59.0 | 531 | 0.0021 | 0.7378 | 0.7126 | 0.7379 | 0.7390 |
| 0.0034 | 60.0 | 540 | 0.0021 | 0.7378 | 0.7126 | 0.7379 | 0.7390 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.5.0+cu121
- Datasets 3.0.2
- Tokenizers 0.20.1
|
yunguks/walk1009-gptq-new | yunguks | 2024-10-28T06:48:48Z | 49 | 0 | transformers | [
"transformers",
"safetensors",
"exaone",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2024-10-28T05:29:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nicolofelicioni/pythia-1b-sft-hh-hts-8 | nicolofelicioni | 2024-10-28T06:39:12Z | 132 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"trl",
"dpo",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-28T06:35:30Z | ---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Crysiss/checkpoint | Crysiss | 2024-10-28T06:38:42Z | 201 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-28T06:37:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
T-EVEN/krx_Qwen2.5-7B-merged | T-EVEN | 2024-10-28T06:36:55Z | 6 | 0 | null | [
"safetensors",
"qwen2",
"krx",
"license:apache-2.0",
"region:us"
] | null | 2024-10-28T00:34:23Z | ---
license: apache-2.0
tags:
- krx
--- |
iwan-rg/XLM-Roberta-base-Finetuned-EN-AR-Parallel | iwan-rg | 2024-10-28T06:33:36Z | 5 | 0 | null | [
"safetensors",
"xlm-roberta",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"region:us"
] | null | 2024-10-28T06:33:05Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base-finetuned-Parallel-mlm-0.15-base-27OCT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-Parallel-mlm-0.15-base-27OCT
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9339
## Model description
More information needed
## Intended uses & limitations
More information needed
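Although no usage notes are provided, the checkpoint was trained with masked-language modelling, so a minimal sketch (the sentence below is a placeholder) is to query it with the fill-mask pipeline:

```python
# Illustrative sketch: query the MLM checkpoint with the fill-mask pipeline.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="iwan-rg/XLM-Roberta-base-Finetuned-EN-AR-Parallel")
print(fill_mask("The capital of France is <mask>."))  # XLM-R uses <mask> as its mask token
```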
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| No log | 0.0998 | 100 | 1.4204 |
| No log | 0.1997 | 200 | 1.3432 |
| No log | 0.2995 | 300 | 1.3054 |
| No log | 0.3993 | 400 | 1.2756 |
| 1.5915 | 0.4992 | 500 | 1.2552 |
| 1.5915 | 0.5990 | 600 | 1.2327 |
| 1.5915 | 0.6988 | 700 | 1.2152 |
| 1.5915 | 0.7987 | 800 | 1.2012 |
| 1.5915 | 0.8985 | 900 | 1.2002 |
| 1.3946 | 0.9983 | 1000 | 1.1854 |
| 1.3946 | 1.0982 | 1100 | 1.1824 |
| 1.3946 | 1.1980 | 1200 | 1.1723 |
| 1.3946 | 1.2979 | 1300 | 1.1589 |
| 1.3946 | 1.3977 | 1400 | 1.1490 |
| 1.321 | 1.4975 | 1500 | 1.1387 |
| 1.321 | 1.5974 | 1600 | 1.1356 |
| 1.321 | 1.6972 | 1700 | 1.1252 |
| 1.321 | 1.7970 | 1800 | 1.1259 |
| 1.321 | 1.8969 | 1900 | 1.1182 |
| 1.2735 | 1.9967 | 2000 | 1.1144 |
| 1.2735 | 2.0965 | 2100 | 1.0966 |
| 1.2735 | 2.1964 | 2200 | 1.1005 |
| 1.2735 | 2.2962 | 2300 | 1.0952 |
| 1.2735 | 2.3960 | 2400 | 1.0935 |
| 1.235 | 2.4959 | 2500 | 1.0840 |
| 1.235 | 2.5957 | 2600 | 1.0766 |
| 1.235 | 2.6955 | 2700 | 1.0719 |
| 1.235 | 2.7954 | 2800 | 1.0665 |
| 1.235 | 2.8952 | 2900 | 1.0644 |
| 1.1954 | 2.9950 | 3000 | 1.0656 |
| 1.1954 | 3.0949 | 3100 | 1.0574 |
| 1.1954 | 3.1947 | 3200 | 1.0495 |
| 1.1954 | 3.2945 | 3300 | 1.0475 |
| 1.1954 | 3.3944 | 3400 | 1.0452 |
| 1.1707 | 3.4942 | 3500 | 1.0399 |
| 1.1707 | 3.5940 | 3600 | 1.0363 |
| 1.1707 | 3.6939 | 3700 | 1.0291 |
| 1.1707 | 3.7937 | 3800 | 1.0338 |
| 1.1707 | 3.8936 | 3900 | 1.0348 |
| 1.1509 | 3.9934 | 4000 | 1.0319 |
| 1.1509 | 4.0932 | 4100 | 1.0219 |
| 1.1509 | 4.1931 | 4200 | 1.0214 |
| 1.1509 | 4.2929 | 4300 | 1.0161 |
| 1.1509 | 4.3927 | 4400 | 1.0158 |
| 1.1275 | 4.4926 | 4500 | 1.0153 |
| 1.1275 | 4.5924 | 4600 | 1.0067 |
| 1.1275 | 4.6922 | 4700 | 1.0058 |
| 1.1275 | 4.7921 | 4800 | 1.0097 |
| 1.1275 | 4.8919 | 4900 | 1.0037 |
| 1.1127 | 4.9917 | 5000 | 1.0048 |
| 1.1127 | 5.0916 | 5100 | 1.0022 |
| 1.1127 | 5.1914 | 5200 | 0.9947 |
| 1.1127 | 5.2912 | 5300 | 0.9947 |
| 1.1127 | 5.3911 | 5400 | 0.9907 |
| 1.0944 | 5.4909 | 5500 | 0.9909 |
| 1.0944 | 5.5907 | 5600 | 0.9861 |
| 1.0944 | 5.6906 | 5700 | 0.9858 |
| 1.0944 | 5.7904 | 5800 | 0.9861 |
| 1.0944 | 5.8902 | 5900 | 0.9791 |
| 1.0847 | 5.9901 | 6000 | 0.9787 |
| 1.0847 | 6.0899 | 6100 | 0.9744 |
| 1.0847 | 6.1897 | 6200 | 0.9752 |
| 1.0847 | 6.2896 | 6300 | 0.9712 |
| 1.0847 | 6.3894 | 6400 | 0.9723 |
| 1.0662 | 6.4893 | 6500 | 0.9706 |
| 1.0662 | 6.5891 | 6600 | 0.9688 |
| 1.0662 | 6.6889 | 6700 | 0.9692 |
| 1.0662 | 6.7888 | 6800 | 0.9655 |
| 1.0662 | 6.8886 | 6900 | 0.9637 |
| 1.0559 | 6.9884 | 7000 | 0.9629 |
| 1.0559 | 7.0883 | 7100 | 0.9618 |
| 1.0559 | 7.1881 | 7200 | 0.9622 |
| 1.0559 | 7.2879 | 7300 | 0.9605 |
| 1.0559 | 7.3878 | 7400 | 0.9560 |
| 1.0439 | 7.4876 | 7500 | 0.9562 |
| 1.0439 | 7.5874 | 7600 | 0.9566 |
| 1.0439 | 7.6873 | 7700 | 0.9515 |
| 1.0439 | 7.7871 | 7800 | 0.9514 |
| 1.0439 | 7.8869 | 7900 | 0.9542 |
| 1.0358 | 7.9868 | 8000 | 0.9504 |
| 1.0358 | 8.0866 | 8100 | 0.9502 |
| 1.0358 | 8.1864 | 8200 | 0.9494 |
| 1.0358 | 8.2863 | 8300 | 0.9451 |
| 1.0358 | 8.3861 | 8400 | 0.9461 |
| 1.0242 | 8.4859 | 8500 | 0.9447 |
| 1.0242 | 8.5858 | 8600 | 0.9455 |
| 1.0242 | 8.6856 | 8700 | 0.9441 |
| 1.0242 | 8.7854 | 8800 | 0.9399 |
| 1.0242 | 8.8853 | 8900 | 0.9410 |
| 1.0198 | 8.9851 | 9000 | 0.9391 |
| 1.0198 | 9.0850 | 9100 | 0.9390 |
| 1.0198 | 9.1848 | 9200 | 0.9379 |
| 1.0198 | 9.2846 | 9300 | 0.9382 |
| 1.0198 | 9.3845 | 9400 | 0.9377 |
| 1.0094 | 9.4843 | 9500 | 0.9363 |
| 1.0094 | 9.5841 | 9600 | 0.9354 |
| 1.0094 | 9.6840 | 9700 | 0.9353 |
| 1.0094 | 9.7838 | 9800 | 0.9351 |
| 1.0094 | 9.8836 | 9900 | 0.9342 |
| 1.011 | 9.9835 | 10000 | 0.9339 |
### Framework versions
- Transformers 4.43.4
- Pytorch 2.1.1+cu121
- Datasets 3.0.2
- Tokenizers 0.19.1
|
Crysiss/sql_gpt2_124m_sft | Crysiss | 2024-10-28T06:28:30Z | 222 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-28T06:27:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
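In the absence of model-specific instructions, a generic sketch (assuming the checkpoint behaves as a plain GPT-2 causal LM, with a purely hypothetical prompt format) might be:

```python
# Generic sketch only: nothing model-specific is documented in this card.
from transformers import pipeline

generator = pipeline("text-generation", model="Crysiss/sql_gpt2_124m_sft")
# Hypothetical prompt; the expected prompt format is not documented.
print(generator("Question: List all customers from Canada.\nSQL:", max_new_tokens=64))
```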
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gctian/qwen2.5-32B-roleplay-zh | gctian | 2024-10-28T06:27:49Z | 13 | 2 | null | [
"safetensors",
"qwen2",
"roleplay",
"zh",
"en",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-32B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2024-10-28T02:00:42Z | ---
license: apache-2.0
language:
- zh
- en
base_model:
- Qwen/Qwen2.5-32B-Instruct
tags:
- roleplay
---
A role-play LLM fully fine-tuned from the Qwen2.5-32B-Instruct base model with LLaMA-Factory on a large, purpose-built role-play dataset; in evaluation, its role-play ability surpasses Doubao-Pro.
For inference, deploy with LLaMA-Factory and specify `--template qwen`.
gctian/qwen2.5-14B-roleplay-zh | gctian | 2024-10-28T06:26:31Z | 21 | 4 | null | [
"safetensors",
"qwen2",
"roleplay",
"zh",
"en",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2024-10-28T03:14:18Z | ---
license: apache-2.0
language:
- zh
- en
base_model:
- Qwen/Qwen2.5-14B-Instruct
tags:
- roleplay
---
A role-play LLM fully fine-tuned from the Qwen2.5-14B-Instruct base model with LLaMA-Factory on a large, purpose-built role-play dataset; in evaluation, its role-play ability surpasses Doubao-Pro.
For inference, deploy with LLaMA-Factory and specify `--template qwen`.
BigHuggyD/TheDrummer_Behemoth-123B-v1.1_exl2_5.0bpw_h6 | BigHuggyD | 2024-10-28T06:23:09Z | 6 | 1 | null | [
"safetensors",
"mistral",
"license:other",
"5-bit",
"exl2",
"region:us"
] | null | 2024-10-28T05:49:39Z | ---
license: other
---
# Join our Discord! https://discord.gg/Nbv9pQ88Xb
## Nearly 2000 members strong 💪
---
[BeaverAI](https://huggingface.co/BeaverAI) proudly presents...
# Behemoth 123B v1.1 🦣 - Creative Edition
*When you spend your whole life living under a dome, even the idea of an ocean seems impossible to imagine.*

## Description
> One of the few other models that's done this for me is the OG Command R 35B. So seeing Behemoth v1.1 have a similar feel to that but with much higher general intelligence really makes it a favourite of mine
> I was real happy with v1.1 the other day. I've done some tests on v1 and it's a lot better.
> v1 had those glimpses of creativity, but now it's more consistent (with v1.1). It feels like a new model in comparison.
> v1 had slop bro. v1.1 makes it irrelevant. The jump is like 720p to 4k. Seriously.
> The creativity for v1.1 is off the charts compared to v1, like it's juiced. v1 had these moments that I would say... 'Shit, I've never seen a model respond with prose like this, let me regenerate to see what else I get.' Now, even though every regeneration had a flow of possibilities, sometimes, those possibilities never came. v1.1 is comparable to xxx for the first time, every generation. It directs and guides the scene, scenario and characters unlike anything else
> It's about the f***ing prose man. The atmosphere that revolves around the characters. Not just the damn dialogue or introspection. v1.1 will pull from a message 7 generations ago. That window I opened will appear in a future response with the noise from the courtyard filtering through it. The experience of not knowing what this model will produce because it's different than anything else is what keeps it engaging.
## Links
- Original: https://huggingface.co/TheDrummer/Behemoth-123B-v1.1
- GGUF: https://huggingface.co/TheDrummer/Behemoth-123B-v1.1-GGUF
- iMatrix: https://huggingface.co/bartowski/Behemoth-123B-v1.1-GGUF (recommended for smaller quants)
## Arsenal (Supported Chat Templates)
- Mistral
- Smart, adaptable, familiar
- Metharme (Pygmalion in ST)
- Creative, unhinged, unique
- Alpaca
- Creative, unique, unhinged
- Text Completion
- You can mix it up and see which works best for you.
### Favorite RP Format
`*action* Dialogue *thoughts* Dialogue *narration*` in 1st person PoV
## What's Next?
- Already have plans for a v2!
## Special Thanks
- Thank you to each and everyone who donated in [Ko-Fi](https://ko-fi.com/thedrummer) to make our venture a little bit easier.
- KinjiHakari777, Dr. Fjut, Kistara, Pseudo, AlexTheVP, Dakkidaze, EvarinSharath'fe, ONTHEREDTEAM, F, Mariana, Garg, Silva, Grozi, & **Phaelon**

<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/FNWdi0WlH-Xd3fjkGVPpp.mpga"></audio>
|
outlookAi/WEHqXSM1BK | outlookAi | 2024-10-28T06:14:20Z | 5 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-10-28T05:38:00Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Pornchanart
---
# Wehqxsm1Bk
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Pornchanart` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('outlookAi/WEHqXSM1BK', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
JingyeChen22/textdiffuser2_layout_planner | JingyeChen22 | 2024-10-28T05:51:18Z | 1,285 | 4 | diffusers | [
"diffusers",
"pytorch",
"llama",
"text-to-image",
"arxiv:2311.16465",
"license:mit",
"region:us"
] | text-to-image | 2023-12-10T00:54:19Z | ---
pipeline_tag: text-to-image
library_name: diffusers
license: mit
---
# Model
This repo contains the layout planner model of the paper [TextDiffuser-2: Unleashing the Power of Language Models for Text Rendering](https://huggingface.co/papers/2311.16465).
# Usage
The script [here](https://github.com/microsoft/unilm/tree/master/textdiffuser-2#firecracker-inference) can be used to perform inference with the model. |
Covvenheimer/taylor_swift_model | Covvenheimer | 2024-10-28T05:50:52Z | 16 | 0 | null | [
"gguf",
"llama",
"text-generation",
"qlora",
"lyrics",
"taylor-swift",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-28T05:19:12Z | ---
base_model: unsloth/llama-3.2-1b-bnb-4bit
tags:
- text-generation
- qlora
- llama
- gguf
- lyrics
- taylor-swift
license: apache-2.0
language: en
pipeline_tag: text-generation
---
# Taylor Swift Lyrics Llama Model (3.2, GGUF Format)
- **Base Model**: unsloth/llama-3.2-1b-bnb-4bit
- **Fine-tuned on**: Taylor Swift lyrics using QLoRA
- **Format**: GGUF (Not compatible with the `transformers` library; requires `llama-cpp-python`)
- **License**: Apache-2.0
- **Developed by**: Covvenheimer and Team
This model, fine-tuned on Taylor Swift lyrics, is tailored for generating text in the style of her songs. It was trained with a 2x speed improvement using [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face’s TRL library, specifically optimized for GGUF format.
> **Important:** This model requires `llama-cpp-python` to run. It is incompatible with the `transformers` library due to its GGUF format.
## Installation and Setup
To load and run this model, install the `llama-cpp-python` library and download the model file from the Hugging Face Hub.
### Step-by-Step Code Example
1. **Install llama-cpp-python**
```python
!pip install llama-cpp-python
```
2. **Load the Model with llama-cpp**
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama
# Define your model repository and file name
repo_id = "Covvenheimer/taylor_swift_model"
filename = "unsloth.Q4_K_M.gguf"
# Download the GGUF model file from Hugging Face
model_path = hf_hub_download(repo_id=repo_id, filename=filename)
# Load the model using llama-cpp-python
llm = Llama(model_path=model_path)
```
3. **Generate Text Using a Prompt**
```python
# Define a prompt for generating lyrics
prompt = """You are a songwriter composing a song in the style of Taylor Swift.
Write lyrics that reflect her themes and musical style, focusing on Love, Heartbreak, Romance."""
# Generate lyrics
output = llm(prompt, max_tokens=512, temperature=0.8)
print(output["choices"][0]["text"])
```
> **Tip:** To customize the themes, change the last line of the prompt. For example, you can modify the focus to other themes like "Friendship, Growth, Resilience." Replace `"Love, Heartbreak, Romance"` with your chosen themes.
This setup will allow you to use the model efficiently and generate lyrics in the style of Taylor Swift.
---
# Uploaded model
- **Developed by:** Covvenheimer
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
binh230/deberta-base | binh230 | 2024-10-28T05:39:09Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-26T14:50:38Z | ---
library_name: transformers
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: deberta-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-base
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1665
- Accuracy: 0.9601
- Precision: 0.9599
- Recall: 0.9601
- F1: 0.9594
- Auroc: 0.9928
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- label_smoothing_factor: 0.03
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Auroc |
|:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:------:|
| 0.4866 | 0.0988 | 256 | 0.2931 | 0.8845 | 0.8939 | 0.8845 | 0.8876 | 0.9465 |
| 0.2757 | 0.1977 | 512 | 0.3478 | 0.8898 | 0.8984 | 0.8898 | 0.8765 | 0.9544 |
| 0.2433 | 0.2965 | 768 | 0.2097 | 0.9404 | 0.9413 | 0.9404 | 0.9408 | 0.9799 |
| 0.2332 | 0.3953 | 1024 | 0.3548 | 0.8815 | 0.8907 | 0.8815 | 0.8657 | 0.9690 |
| 0.2152 | 0.4942 | 1280 | 0.1942 | 0.9440 | 0.9434 | 0.9440 | 0.9426 | 0.9868 |
| 0.1907 | 0.5930 | 1536 | 0.1615 | 0.9649 | 0.9647 | 0.9649 | 0.9647 | 0.9899 |
| 0.1865 | 0.6918 | 1792 | 0.1556 | 0.9655 | 0.9654 | 0.9655 | 0.9654 | 0.9922 |
| 0.1865 | 0.7907 | 2048 | 0.2322 | 0.9369 | 0.9370 | 0.9369 | 0.9344 | 0.9773 |
| 0.168 | 0.8895 | 2304 | 0.1653 | 0.9672 | 0.9670 | 0.9672 | 0.9668 | 0.9937 |
| 0.1732 | 0.9883 | 2560 | 0.1467 | 0.9702 | 0.9716 | 0.9702 | 0.9706 | 0.9935 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
nazimasker/Image-Caption-University-model | nazimasker | 2024-10-28T05:34:28Z | 67 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"git",
"image-text-to-text",
"generated_from_trainer",
"base_model:microsoft/git-base",
"base_model:finetune:microsoft/git-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-10-28T05:09:44Z | ---
library_name: transformers
license: mit
base_model: microsoft/git-base
tags:
- generated_from_trainer
model-index:
- name: Image-Caption-University-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Image-Caption-University-model
This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4060
- Wer Score: 2.0586
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Score |
|:-------------:|:-------:|:----:|:---------------:|:---------:|
| 7.1651 | 6.6667 | 50 | 4.6332 | 2.9074 |
| 2.6998 | 13.3333 | 100 | 1.0675 | 2.1821 |
| 0.3956 | 20.0 | 150 | 0.3752 | 2.5494 |
| 0.0633 | 26.6667 | 200 | 0.3804 | 1.9321 |
| 0.0196 | 33.3333 | 250 | 0.3981 | 2.4105 |
| 0.0141 | 40.0 | 300 | 0.4050 | 2.0679 |
| 0.0115 | 46.6667 | 350 | 0.4060 | 2.0586 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.0.2
- Tokenizers 0.19.1
|
nhyha/N3N_internlm2_5-7b-chat-llamafied_1028_0425 | nhyha | 2024-10-28T05:34:18Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:nhyha/internlm2_5-7b-chat-llamafied_v4",
"base_model:finetune:nhyha/internlm2_5-7b-chat-llamafied_v4",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-28T05:26:21Z | ---
base_model: nhyha/internlm2_5-7b-chat-llamafied_v4
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** nhyha
- **License:** apache-2.0
- **Finetuned from model :** nhyha/internlm2_5-7b-chat-llamafied_v4
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
madmaxx26/Llama_3.1_8B_Instruct_MovieRec | madmaxx26 | 2024-10-28T05:33:52Z | 6 | 0 | null | [
"safetensors",
"llama",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-10-28T04:51:58Z | ---
license: apache-2.0
---
|
HappyAIUser/Test2 | HappyAIUser | 2024-10-28T05:27:23Z | 77 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Llama-3.2-3B-Instruct-bnb-4bit",
"base_model:finetune:unsloth/Llama-3.2-3B-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-28T05:18:24Z | ---
base_model: unsloth/Llama-3.2-3B-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** HappyAIUser
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-3B-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/Nutanix_-_llama-30b_checkpoint-6500_20241020-024234-merged-gguf | RichardErkhov | 2024-10-28T05:03:07Z | 17 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-10-27T16:42:10Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama-30b_checkpoint-6500_20241020-024234-merged - GGUF
- Model creator: https://huggingface.co/Nutanix/
- Original model: https://huggingface.co/Nutanix/llama-30b_checkpoint-6500_20241020-024234-merged/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama-30b_checkpoint-6500_20241020-024234-merged.Q2_K.gguf](https://huggingface.co/RichardErkhov/Nutanix_-_llama-30b_checkpoint-6500_20241020-024234-merged-gguf/blob/main/llama-30b_checkpoint-6500_20241020-024234-merged.Q2_K.gguf) | Q2_K | 11.22GB |
| [llama-30b_checkpoint-6500_20241020-024234-merged.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Nutanix_-_llama-30b_checkpoint-6500_20241020-024234-merged-gguf/blob/main/llama-30b_checkpoint-6500_20241020-024234-merged.Q3_K_S.gguf) | Q3_K_S | 13.1GB |
| [llama-30b_checkpoint-6500_20241020-024234-merged.Q3_K.gguf](https://huggingface.co/RichardErkhov/Nutanix_-_llama-30b_checkpoint-6500_20241020-024234-merged-gguf/blob/main/llama-30b_checkpoint-6500_20241020-024234-merged.Q3_K.gguf) | Q3_K | 14.69GB |
| [llama-30b_checkpoint-6500_20241020-024234-merged.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Nutanix_-_llama-30b_checkpoint-6500_20241020-024234-merged-gguf/blob/main/llama-30b_checkpoint-6500_20241020-024234-merged.Q3_K_M.gguf) | Q3_K_M | 14.69GB |
| [llama-30b_checkpoint-6500_20241020-024234-merged.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Nutanix_-_llama-30b_checkpoint-6500_20241020-024234-merged-gguf/blob/main/llama-30b_checkpoint-6500_20241020-024234-merged.Q3_K_L.gguf) | Q3_K_L | 16.09GB |
| [llama-30b_checkpoint-6500_20241020-024234-merged.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Nutanix_-_llama-30b_checkpoint-6500_20241020-024234-merged-gguf/blob/main/llama-30b_checkpoint-6500_20241020-024234-merged.IQ4_XS.gguf) | IQ4_XS | 16.28GB |
| [llama-30b_checkpoint-6500_20241020-024234-merged.Q4_0.gguf](https://huggingface.co/RichardErkhov/Nutanix_-_llama-30b_checkpoint-6500_20241020-024234-merged-gguf/blob/main/llama-30b_checkpoint-6500_20241020-024234-merged.Q4_0.gguf) | Q4_0 | 17.1GB |
| [llama-30b_checkpoint-6500_20241020-024234-merged.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Nutanix_-_llama-30b_checkpoint-6500_20241020-024234-merged-gguf/blob/main/llama-30b_checkpoint-6500_20241020-024234-merged.IQ4_NL.gguf) | IQ4_NL | 17.19GB |
| [llama-30b_checkpoint-6500_20241020-024234-merged.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Nutanix_-_llama-30b_checkpoint-6500_20241020-024234-merged-gguf/blob/main/llama-30b_checkpoint-6500_20241020-024234-merged.Q4_K_S.gguf) | Q4_K_S | 17.21GB |
| [llama-30b_checkpoint-6500_20241020-024234-merged.Q4_K.gguf](https://huggingface.co/RichardErkhov/Nutanix_-_llama-30b_checkpoint-6500_20241020-024234-merged-gguf/blob/main/llama-30b_checkpoint-6500_20241020-024234-merged.Q4_K.gguf) | Q4_K | 18.27GB |
| [llama-30b_checkpoint-6500_20241020-024234-merged.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Nutanix_-_llama-30b_checkpoint-6500_20241020-024234-merged-gguf/blob/main/llama-30b_checkpoint-6500_20241020-024234-merged.Q4_K_M.gguf) | Q4_K_M | 18.27GB |
| [llama-30b_checkpoint-6500_20241020-024234-merged.Q4_1.gguf](https://huggingface.co/RichardErkhov/Nutanix_-_llama-30b_checkpoint-6500_20241020-024234-merged-gguf/blob/main/llama-30b_checkpoint-6500_20241020-024234-merged.Q4_1.gguf) | Q4_1 | 18.98GB |
| [llama-30b_checkpoint-6500_20241020-024234-merged.Q5_0.gguf](https://huggingface.co/RichardErkhov/Nutanix_-_llama-30b_checkpoint-6500_20241020-024234-merged-gguf/blob/main/llama-30b_checkpoint-6500_20241020-024234-merged.Q5_0.gguf) | Q5_0 | 20.86GB |
| [llama-30b_checkpoint-6500_20241020-024234-merged.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Nutanix_-_llama-30b_checkpoint-6500_20241020-024234-merged-gguf/blob/main/llama-30b_checkpoint-6500_20241020-024234-merged.Q5_K_S.gguf) | Q5_K_S | 20.86GB |
| [llama-30b_checkpoint-6500_20241020-024234-merged.Q5_K.gguf](https://huggingface.co/RichardErkhov/Nutanix_-_llama-30b_checkpoint-6500_20241020-024234-merged-gguf/blob/main/llama-30b_checkpoint-6500_20241020-024234-merged.Q5_K.gguf) | Q5_K | 21.46GB |
| [llama-30b_checkpoint-6500_20241020-024234-merged.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Nutanix_-_llama-30b_checkpoint-6500_20241020-024234-merged-gguf/blob/main/llama-30b_checkpoint-6500_20241020-024234-merged.Q5_K_M.gguf) | Q5_K_M | 21.46GB |
| [llama-30b_checkpoint-6500_20241020-024234-merged.Q5_1.gguf](https://huggingface.co/RichardErkhov/Nutanix_-_llama-30b_checkpoint-6500_20241020-024234-merged-gguf/blob/main/llama-30b_checkpoint-6500_20241020-024234-merged.Q5_1.gguf) | Q5_1 | 22.74GB |
| [llama-30b_checkpoint-6500_20241020-024234-merged.Q6_K.gguf](https://huggingface.co/RichardErkhov/Nutanix_-_llama-30b_checkpoint-6500_20241020-024234-merged-gguf/blob/main/llama-30b_checkpoint-6500_20241020-024234-merged.Q6_K.gguf) | Q6_K | 24.85GB |
| [llama-30b_checkpoint-6500_20241020-024234-merged.Q8_0.gguf](https://huggingface.co/RichardErkhov/Nutanix_-_llama-30b_checkpoint-6500_20241020-024234-merged-gguf/blob/main/llama-30b_checkpoint-6500_20241020-024234-merged.Q8_0.gguf) | Q8_0 | 32.19GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mathofdynamic/mamali-lora-03 | mathofdynamic | 2024-10-28T05:02:57Z | 5 | 0 | diffusers | [
"diffusers",
"Lora",
"image-to-image",
"en",
"dataset:crystantine/fluxgym",
"license:artistic-2.0",
"region:us"
] | image-to-image | 2024-10-24T10:46:29Z | ---
license: artistic-2.0
datasets:
- crystantine/fluxgym
language:
- en
metrics:
- accuracy
new_version: black-forest-labs/FLUX.1-dev
pipeline_tag: image-to-image
library_name: diffusers
tags:
- Lora
--- |
mradermacher/LLAMA-3.2-1B-OpenHermes2.5-GGUF | mradermacher | 2024-10-28T04:57:31Z | 136 | 1 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"fine-tuned",
"instruction-tuned",
"en",
"dataset:teknium/OpenHermes-2.5",
"base_model:artificialguybr/LLAMA-3.2-1B-OpenHermes2.5",
"base_model:quantized:artificialguybr/LLAMA-3.2-1B-OpenHermes2.5",
"license:llama3.2",
"endpoints_compatible",
"region:us"
] | null | 2024-10-19T09:07:31Z | ---
base_model: artificialguybr/LLAMA-3.2-1B-OpenHermes2.5
datasets:
- teknium/OpenHermes-2.5
language:
- en
library_name: transformers
license: llama3.2
quantized_by: mradermacher
tags:
- generated_from_trainer
- fine-tuned
- instruction-tuned
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/artificialguybr/LLAMA-3.2-1B-OpenHermes2.5
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
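As a quick start, here is a hedged sketch of loading one of the quants listed below with `llama-cpp-python` (the filename is the Q4_K_M entry from the table; the context size and sampling settings are illustrative assumptions):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single quant file from this repo and load it.
model_path = hf_hub_download(
    repo_id="mradermacher/LLAMA-3.2-1B-OpenHermes2.5-GGUF",
    filename="LLAMA-3.2-1B-OpenHermes2.5.Q4_K_M.gguf",
)
llm = Llama(model_path=model_path, n_ctx=4096)

output = llm("Briefly explain what a GGUF file is.", max_tokens=128)
print(output["choices"][0]["text"])
```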
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LLAMA-3.2-1B-OpenHermes2.5-GGUF/resolve/main/LLAMA-3.2-1B-OpenHermes2.5.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/LLAMA-3.2-1B-OpenHermes2.5-GGUF/resolve/main/LLAMA-3.2-1B-OpenHermes2.5.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/LLAMA-3.2-1B-OpenHermes2.5-GGUF/resolve/main/LLAMA-3.2-1B-OpenHermes2.5.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LLAMA-3.2-1B-OpenHermes2.5-GGUF/resolve/main/LLAMA-3.2-1B-OpenHermes2.5.Q3_K_L.gguf) | Q3_K_L | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/LLAMA-3.2-1B-OpenHermes2.5-GGUF/resolve/main/LLAMA-3.2-1B-OpenHermes2.5.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/LLAMA-3.2-1B-OpenHermes2.5-GGUF/resolve/main/LLAMA-3.2-1B-OpenHermes2.5.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LLAMA-3.2-1B-OpenHermes2.5-GGUF/resolve/main/LLAMA-3.2-1B-OpenHermes2.5.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LLAMA-3.2-1B-OpenHermes2.5-GGUF/resolve/main/LLAMA-3.2-1B-OpenHermes2.5.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/LLAMA-3.2-1B-OpenHermes2.5-GGUF/resolve/main/LLAMA-3.2-1B-OpenHermes2.5.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/LLAMA-3.2-1B-OpenHermes2.5-GGUF/resolve/main/LLAMA-3.2-1B-OpenHermes2.5.Q6_K.gguf) | Q6_K | 1.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/LLAMA-3.2-1B-OpenHermes2.5-GGUF/resolve/main/LLAMA-3.2-1B-OpenHermes2.5.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/LLAMA-3.2-1B-OpenHermes2.5-GGUF/resolve/main/LLAMA-3.2-1B-OpenHermes2.5.f16.gguf) | f16 | 3.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/MoE-Girl-800MA-3BT-GGUF | mradermacher | 2024-10-28T04:55:57Z | 287 | 3 | transformers | [
"transformers",
"gguf",
"axolotl",
"moe",
"roleplay",
"en",
"base_model:allura-org/MoE-Girl-800MA-3BT",
"base_model:quantized:allura-org/MoE-Girl-800MA-3BT",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-27T11:26:37Z | ---
base_model: allura-org/MoE-Girl-800MA-3BT
language:
- en
library_name: transformers
license: apache-2.0
no_imatrix: Missing importance matrix for tensor blk.0.ffn_gate_exps.weight in a very
low-bit quantization
quantized_by: mradermacher
tags:
- axolotl
- moe
- roleplay
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/allura-org/MoE-Girl-800MA-3BT
<!-- provided-files -->
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MoE-Girl-800MA-3BT-GGUF/resolve/main/MoE-Girl-800MA-3BT.Q2_K.gguf) | Q2_K | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/MoE-Girl-800MA-3BT-GGUF/resolve/main/MoE-Girl-800MA-3BT.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/MoE-Girl-800MA-3BT-GGUF/resolve/main/MoE-Girl-800MA-3BT.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MoE-Girl-800MA-3BT-GGUF/resolve/main/MoE-Girl-800MA-3BT.Q3_K_L.gguf) | Q3_K_L | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/MoE-Girl-800MA-3BT-GGUF/resolve/main/MoE-Girl-800MA-3BT.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/MoE-Girl-800MA-3BT-GGUF/resolve/main/MoE-Girl-800MA-3BT.Q4_K_S.gguf) | Q4_K_S | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MoE-Girl-800MA-3BT-GGUF/resolve/main/MoE-Girl-800MA-3BT.Q4_K_M.gguf) | Q4_K_M | 2.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MoE-Girl-800MA-3BT-GGUF/resolve/main/MoE-Girl-800MA-3BT.Q5_K_S.gguf) | Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/MoE-Girl-800MA-3BT-GGUF/resolve/main/MoE-Girl-800MA-3BT.Q5_K_M.gguf) | Q5_K_M | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/MoE-Girl-800MA-3BT-GGUF/resolve/main/MoE-Girl-800MA-3BT.Q6_K.gguf) | Q6_K | 2.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MoE-Girl-800MA-3BT-GGUF/resolve/main/MoE-Girl-800MA-3BT.Q8_0.gguf) | Q8_0 | 3.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MoE-Girl-800MA-3BT-GGUF/resolve/main/MoE-Girl-800MA-3BT.f16.gguf) | f16 | 6.9 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
iecjsu/Llama-32-3B-SFT-sentiment | iecjsu | 2024-10-28T04:54:50Z | 10 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-28T04:53:30Z | ---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** iecjsu
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
olabs-ai/unsloth-Llama-3.2-1B-Instruct-bnb-4bit-GGUF | olabs-ai | 2024-10-28T04:48:50Z | 21 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/Llama-3.2-1B-Instruct-bnb-4bit",
"base_model:quantized:unsloth/Llama-3.2-1B-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-28T04:45:46Z | ---
base_model: unsloth/Llama-3.2-1B-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** olabs-ai
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-1B-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
hyobi18220/jam_krx_qwen2.5_v1 | hyobi18220 | 2024-10-28T04:41:31Z | 10 | 0 | null | [
"safetensors",
"qwen2",
"krx",
"en",
"ko",
"base_model:unsloth/Qwen2.5-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-7B-Instruct",
"region:us"
] | null | 2024-10-28T02:35:59Z | ---
language:
- en
- ko
base_model:
- unsloth/Qwen2.5-7B-Instruct
tags:
- krx
--- |
olabs-ai/unsloth-Llama-3.2-1B-bnb-4bit | olabs-ai | 2024-10-28T04:37:38Z | 13 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/Llama-3.2-1B-bnb-4bit",
"base_model:quantized:unsloth/Llama-3.2-1B-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-10-28T04:34:06Z | ---
base_model: unsloth/Llama-3.2-1B-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** olabs-ai
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-1B-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
2ch/penflux | 2ch | 2024-10-28T04:31:01Z | 37,346 | 3 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2024-10-28T04:13:10Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
close-up of thick average length penis, overhanging foreskin, dripping translucent fluid, clear viscous fluid dripping from tip
output:
url: images/ComfyUI_01622_.png
- text: >-
realistic, highres, outdoors, medieval village, overcast sky, photo of young rugged medieval men with skimpy (loose open shirt:1.5) and (large erect penis:1.3) out of torn medieval style pants, beard, testicles, lying on back, spread legs, masturbating, fat belly, cute, blond
output:
url: images/ComfyUI_01855_.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: >-
uncut penis, flaccid penis, overhanging foreskin, dripping translucent fluid,
clear viscous fluid dripping from tip
language:
- en
pipeline_tag: text-to-image
---
# xpenis
<Gallery />
## Model description
[https://civitai.com/models/961155](https://civitai.com/models/751269)
## Trigger words
You should use `uncut penis` to trigger the image generation.
You should use `flaccid penis` to trigger the image generation.
You should use `overhanging foreskin` to trigger the image generation.
You should use `dripping translucent fluid` to trigger the image generation.
You should use `clear viscous fluid dripping from tip` to trigger the image generation.
## Download model
Weights for this model are available in PyTorch format.
[Download](/2ch/penflux/tree/main) them in the Files & versions tab. |
artificialguybr/QWEN-2.5-0.5B-Synthia-II-gguf | artificialguybr | 2024-10-28T04:13:54Z | 28 | 1 | transformers | [
"transformers",
"gguf",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"distillation",
"en",
"dataset:migtissera/Synthia-v1.5-II",
"base_model:artificialguybr/QWEN-2.5-0.5B-Synthia-II",
"base_model:quantized:artificialguybr/QWEN-2.5-0.5B-Synthia-II",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-28T04:01:55Z | ---
base_model: artificialguybr/QWEN-2.5-0.5B-Synthia-II
datasets:
- migtissera/Synthia-v1.5-II
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: artificialguybr
tags:
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
Thanks [Redmond.AI](https://redmond.ai/) for GPU Sponsor!
Quantization for: https://huggingface.co/artificialguybr/QWEN-2.5-0.5B-Synthia-II
## How to use
If you are unsure how to use GGUF files, look at the [TheBloke
READMEs](https://huggingface.co/TheBloke/CodeLlama-70B-Python-GGUF) for
more details, including on how to concatenate multi-part files.
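As a quick start, a hedged `llama-cpp-python` sketch (the quant filename below is an assumption, not taken from this card; check the Files tab of this repo for the exact GGUF filenames). The ChatML chat format matches the model's tags:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# The filename is a guess at a Q4_K_M quant; substitute a file that actually exists in this repo.
model_path = hf_hub_download(
    repo_id="artificialguybr/QWEN-2.5-0.5B-Synthia-II-gguf",
    filename="QWEN-2.5-0.5B-Synthia-II.Q4_K_M.gguf",
)
llm = Llama(model_path=model_path, n_ctx=4096, chat_format="chatml")

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is Synthia?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```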
|
Prabesh06/videomae-base-finetuned-AbnormalVideos | Prabesh06 | 2024-10-28T03:57:51Z | 64 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2024-10-28T03:15:58Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-AbnormalVideos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-AbnormalVideos
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7391
- Accuracy: 0.0698
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 202
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 2.3988 | 0.5050 | 102 | 2.8485 | 0.0679 |
| 2.1951 | 1.4950 | 202 | 2.7391 | 0.0698 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.1.1+cu121
- Datasets 3.0.2
- Tokenizers 0.19.1
|
Kishor798/speecht5_finetuned_TTS_Kishor798 | Kishor798 | 2024-10-28T03:56:42Z | 83 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2024-10-28T03:42:55Z | ---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: speecht5_finetuned_TTS_Kishor798
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_TTS_Kishor798
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4395
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.6018 | 2.7586 | 100 | 0.5301 |
| 0.52 | 5.5172 | 200 | 0.4727 |
| 0.4919 | 8.2759 | 300 | 0.4605 |
| 0.4598 | 11.0345 | 400 | 0.4450 |
| 0.4425 | 13.7931 | 500 | 0.4395 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.0.2
- Tokenizers 0.19.1
|
Data-Selection/BSL-1B | Data-Selection | 2024-10-28T03:55:49Z | 128 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"en",
"dataset:togethercomputer/RedPajama-Data-1T",
"arxiv:2410.07064",
"arxiv:2310.06825",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-08T14:51:25Z | ---
license: apache-2.0
datasets:
- togethercomputer/RedPajama-Data-1T
language:
- en
pipeline_tag: text-generation
library_name: transformers
---
## BSL-1B
[paper](https://arxiv.org/abs/2410.07064) | [code](https://github.com/microsoft/LMOps/tree/main/data_selection)
**BSL-1B** is a 1B model with the [Mistral](https://arxiv.org/abs/2310.06825) architecture, pre-trained from scratch on the CC split of [Redpajama](https://github.com/togethercomputer/RedPajama-Data).
**It is used as the baseline for [PDS-1B](https://huggingface.co/Data-Selection/PDS-1B).**
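A minimal sketch of loading the checkpoint with 🤗 `transformers` for plain text continuation (the sampling settings are illustrative assumptions, and the repo is assumed to ship a compatible tokenizer):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Data-Selection/BSL-1B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# BSL-1B is a base (non-instruct) model, so prompt it for continuation rather than chat.
inputs = tokenizer("Data selection for language model pre-training", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```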
### Evaluation
PDS-selected data improves the performance of language models pre-trained from scratch and saves pre-training computation. The improvement scales up to large model sizes.
<p align='left'>
<img src="https://cdn-uploads.huggingface.co/production/uploads/624ac662102fcdff87be51b9/6undIr37d10qD73TDiPDK.png" width="600">
</p>
### Citation
```bibtex
@article{gu2024data,
title={Data Selection via Optimal Control for Language Models},
author={Gu, Yuxian and Dong, Li and Wang, Hongning and Hao, Yaru and Dong, Qingxiu and Wei, Furu and Huang, Minlie},
journal={arXiv preprint arXiv:2410.07064},
year={2024}
}
```
|
hao1306/ww | hao1306 | 2024-10-28T03:53:50Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-06-17T13:17:58Z | ---
license: apache-2.0
---
|
MatanP/m4t-tamil-model-FineTuned | MatanP | 2024-10-28T03:51:41Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"seamless_m4t",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-10-28T03:36:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
zixianma/mllava_baseline_580k_new-seq_length_8192-lr_1e-5-global_bs_128-epoch_1-fully_finetune-filtered | zixianma | 2024-10-28T03:50:51Z | 5 | 0 | null | [
"safetensors",
"llava",
"generated_from_trainer",
"base_model:TIGER-Lab/Mantis-8B-siglip-llama3-pretraind",
"base_model:finetune:TIGER-Lab/Mantis-8B-siglip-llama3-pretraind",
"license:llama3",
"region:us"
] | null | 2024-10-27T08:20:24Z | ---
license: llama3
base_model: TIGER-Lab/Mantis-8B-siglip-llama3-pretraind
tags:
- generated_from_trainer
model-index:
- name: mllava_baseline_580k_new-seq_length_8192-lr_1e-5-global_bs_128-epoch_1-fully_finetune-filtered
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/zixian-ma-salesforce/Mantis/runs/ctotk8o3)
# mllava_baseline_580k_new-seq_length_8192-lr_1e-5-global_bs_128-epoch_1-fully_finetune-filtered
This model is a fine-tuned version of [TIGER-Lab/Mantis-8B-siglip-llama3-pretraind](https://huggingface.co/TIGER-Lab/Mantis-8B-siglip-llama3-pretraind) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.43.0
- Pytorch 2.4.0+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
|
HappyAIUser/Test | HappyAIUser | 2024-10-28T03:50:15Z | 77 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Llama-3.2-3B-Instruct-bnb-4bit",
"base_model:finetune:unsloth/Llama-3.2-3B-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-28T03:41:26Z | ---
base_model: unsloth/Llama-3.2-3B-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** HappyAIUser
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-3B-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MeshaalMo/finetuning_experiment_14_gguf | MeshaalMo | 2024-10-28T03:46:07Z | 8 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Llama-3.2-3B",
"base_model:quantized:unsloth/Llama-3.2-3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-10-28T03:45:11Z | ---
base_model: unsloth/Llama-3.2-3B
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** MeshaalMo
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-3B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
bhuvana-ak7/OrpoLlama-3.2-1B-V1 | bhuvana-ak7 | 2024-10-28T03:41:41Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-28T01:01:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model is a fine-tuned version of meta-llama/Llama-3.2-1B, trained with the ORPO (Odds Ratio Preference Optimization) trainer.
It was fine-tuned on the mlabonne/orpo-dpo-mix-40k dataset.
Only 1,000 samples were used so the ORPO run would train quickly.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
The base model meta-llama/Llama-3.2-1B has been fine-tuned with ORPO on a small subset of the mlabonne/orpo-dpo-mix-40k dataset.
The Llama 3.2 instruction-tuned text-only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks.
This fine-tuned version aims to improve the model's understanding of prompt context and thereby its interpretability; a minimal training sketch is included after the model details below.
- **Fine-tuned from model:** meta-llama/Llama-3.2-1B
- **Model size:** 1 billion parameters
- **Fine-tuning method:** ORPO
- **Dataset:** mlabonne/orpo-dpo-mix-40k
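A minimal training sketch of this setup with TRL's `ORPOTrainer` (an illustrative reconstruction, not the author's exact script; hyperparameters such as `beta`, sequence lengths, and batch size are assumptions, and depending on your TRL version the dataset may need to be mapped to plain `prompt`/`chosen`/`rejected` text columns):
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_id = "meta-llama/Llama-3.2-1B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Keep only 1,000 preference pairs, as stated in the card.
dataset = load_dataset("mlabonne/orpo-dpo-mix-40k", split="train").select(range(1000))

# Illustrative hyperparameters; the card does not state the actual values used.
config = ORPOConfig(
    output_dir="orpo-llama-3.2-1b",
    beta=0.1,
    max_length=1024,
    max_prompt_length=512,
    per_device_train_batch_size=2,
    num_train_epochs=1,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,  # newer TRL versions take `processing_class=tokenizer` instead
)
trainer.train()
```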
## Evaluation
The model was evaluated on the following benchmarks, with the following performance metrics:
| Tasks |Version|Filter|n-shot| Metric | |Value | |Stderr|
|---------|------:|------|-----:|--------|---|-----:|---|-----:|
|hellaswag| 1|none | 0|acc |↑ |0.4772|± |0.0050|
| | |none | 0|acc_norm|↑ |0.6366|± |0.0048|
|tinyMMLU| 0|none | 0|acc_norm|↑ |0.4306|± | N/A|
|eq_bench| 2.1|none | 0|eqbench |↑ |-12.9709|± |2.9658|
| | |none | 0|percent_parseable|↑ | 92.9825|± |1.9592|
|
relaxml/Llama-2-7b-chat-QTIP-3Bit | relaxml | 2024-10-28T03:41:30Z | 6 | 0 | null | [
"safetensors",
"llama",
"region:us"
] | null | 2024-09-30T01:55:48Z | 
|
relaxml/Llama-2-13b-chat-QTIP-3Bit | relaxml | 2024-10-28T03:41:17Z | 6 | 0 | null | [
"safetensors",
"llama",
"region:us"
] | null | 2024-09-30T02:00:59Z | 
|
relaxml/Llama-2-70b-chat-QTIP-3Bit | relaxml | 2024-10-28T03:41:02Z | 5 | 0 | null | [
"safetensors",
"llama",
"region:us"
] | null | 2024-09-30T02:17:57Z | 
|
Turbo-AI/multilingual-e5-large__trim_vocab | Turbo-AI | 2024-10-28T03:40:01Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-10-28T03:39:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
relaxml/Llama-2-70b-chat-QTIP-4Bit | relaxml | 2024-10-28T03:39:34Z | 8 | 0 | null | [
"safetensors",
"llama",
"region:us"
] | null | 2024-09-30T03:23:01Z | 
|
relaxml/Llama-2-70b-chat-QTIP-2Bit | relaxml | 2024-10-28T03:39:15Z | 6 | 0 | null | [
"safetensors",
"llama",
"region:us"
] | null | 2024-09-05T13:08:59Z | 
|
relaxml/Llama-2-7b-chat-QTIP-2Bit | relaxml | 2024-10-28T03:38:37Z | 137 | 1 | null | [
"safetensors",
"llama",
"region:us"
] | null | 2024-09-05T12:53:01Z | 
|
KR-X-AI/Qwen2-7B-Instruct-v0 | KR-X-AI | 2024-10-28T03:31:31Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"krx",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-27T07:04:05Z | ---
base_model: unsloth/qwen2-7b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
- krx
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** KR-X-AI
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2-7b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
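A minimal sketch of loading the 4-bit base checkpoint with Unsloth and attaching LoRA adapters for further fine-tuning is given below; the sequence length and LoRA settings are illustrative assumptions, not the values used for this model.
```py
# Hedged sketch: load the 4-bit base checkpoint with Unsloth and attach LoRA
# adapters before SFT with TRL. Sequence length and LoRA settings are assumptions.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/qwen2-7b-instruct-bnb-4bit",  # base model named above
    max_seq_length=2048,
    load_in_4bit=True,
)

model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```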
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
KR-X-AI/krx-qwen2-7b-instruct-v0-text | KR-X-AI | 2024-10-28T03:31:05Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"krx",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-26T20:24:55Z | ---
base_model: unsloth/qwen2-7b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- krx
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** KR-X-AI
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2-7b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
pingel/granite-3.0-2b-instruct-pirate | pingel | 2024-10-28T03:28:43Z | 78 | 0 | transformers | [
"transformers",
"safetensors",
"granite",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-10-28T03:27:43Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
joe611/chickens-composite-101818181818-150-epochs-wo-transform | joe611 | 2024-10-28T03:26:12Z | 37 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | 2024-10-24T14:14:34Z | ---
library_name: transformers
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
model-index:
- name: chickens-composite-101818181818-150-epochs-wo-transform
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chickens-composite-101818181818-150-epochs-wo-transform
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2920
- Map: 0.8077
- Map 50: 0.9477
- Map 75: 0.9158
- Map Small: 0.3091
- Map Medium: 0.8073
- Map Large: 0.8068
- Mar 1: 0.3412
- Mar 10: 0.8477
- Mar 100: 0.8503
- Mar Small: 0.3681
- Mar Medium: 0.8532
- Mar Large: 0.8416
- Map Chicken: 0.8113
- Mar 100 Chicken: 0.8536
- Map Duck: 0.7376
- Mar 100 Duck: 0.7884
- Map Plant: 0.8741
- Mar 100 Plant: 0.9087
## Model description
More information needed
## Intended uses & limitations
More information needed
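As a starting point, a minimal inference sketch is given below. It assumes the checkpoint follows the standard `transformers` object-detection API (DETR image processor plus `post_process_object_detection`); the image path is a placeholder.
```py
# Hedged inference sketch; the image path is a placeholder.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForObjectDetection

repo = "joe611/chickens-composite-101818181818-150-epochs-wo-transform"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForObjectDetection.from_pretrained(repo)

image = Image.open("example.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Convert raw outputs to thresholded detections in (x_min, y_min, x_max, y_max) format.
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(
    outputs, target_sizes=target_sizes, threshold=0.5
)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```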
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Chicken | Map Duck | Map Large | Map Medium | Map Plant | Map Small | Mar 1 | Mar 10 | Mar 100 | Mar 100 Chicken | Mar 100 Duck | Mar 100 Plant | Mar Large | Mar Medium | Mar Small |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:-----------:|:--------:|:---------:|:----------:|:---------:|:---------:|:------:|:------:|:-------:|:---------------:|:------------:|:-------------:|:---------:|:----------:|:---------:|
| 1.4053 | 1.0 | 500 | 1.2477 | 0.2132 | 0.2977 | 0.2494 | 0.0752 | 0.0 | 0.2736 | 0.111 | 0.5644 | 0.0246 | 0.0896 | 0.344 | 0.4277 | 0.5375 | 0.0 | 0.7455 | 0.4384 | 0.3889 | 0.125 |
| 1.1026 | 2.0 | 1000 | 0.9474 | 0.3289 | 0.4761 | 0.3739 | 0.2886 | 0.0 | 0.3581 | 0.2865 | 0.6982 | 0.0533 | 0.1172 | 0.4404 | 0.453 | 0.5972 | 0.0 | 0.7617 | 0.4725 | 0.4198 | 0.075 |
| 0.9145 | 3.0 | 1500 | 0.8323 | 0.3774 | 0.5429 | 0.4501 | 0.4292 | 0.0 | 0.3887 | 0.3387 | 0.7029 | 0.056 | 0.1275 | 0.4604 | 0.4671 | 0.6444 | 0.0 | 0.7569 | 0.4759 | 0.434 | 0.1417 |
| 0.7969 | 4.0 | 2000 | 0.7482 | 0.4014 | 0.5722 | 0.4757 | 0.469 | 0.0 | 0.4236 | 0.3599 | 0.7351 | 0.0509 | 0.1308 | 0.4796 | 0.484 | 0.6673 | 0.0 | 0.7846 | 0.5137 | 0.4489 | 0.1 |
| 0.7362 | 5.0 | 2500 | 0.6636 | 0.4105 | 0.5683 | 0.4776 | 0.497 | 0.0 | 0.4218 | 0.37 | 0.7346 | 0.0401 | 0.1347 | 0.4905 | 0.4936 | 0.6996 | 0.0 | 0.7813 | 0.5035 | 0.4576 | 0.1042 |
| 0.7851 | 6.0 | 3000 | 0.6610 | 0.4002 | 0.5655 | 0.4674 | 0.4651 | 0.0 | 0.4233 | 0.3654 | 0.7355 | 0.0907 | 0.1327 | 0.4844 | 0.4895 | 0.6871 | 0.0 | 0.7813 | 0.5251 | 0.4573 | 0.125 |
| 0.6384 | 7.0 | 3500 | 0.6013 | 0.4291 | 0.5761 | 0.5129 | 0.5187 | 0.0 | 0.4431 | 0.4007 | 0.7686 | 0.1241 | 0.1398 | 0.5048 | 0.5076 | 0.7105 | 0.0 | 0.8123 | 0.5196 | 0.4804 | 0.175 |
| 0.6665 | 8.0 | 4000 | 0.5634 | 0.4512 | 0.6017 | 0.5271 | 0.5892 | 0.0 | 0.4484 | 0.4258 | 0.7643 | 0.1577 | 0.1449 | 0.5129 | 0.5176 | 0.7435 | 0.0 | 0.8093 | 0.5259 | 0.4958 | 0.2125 |
| 0.6543 | 9.0 | 4500 | 0.5299 | 0.4662 | 0.611 | 0.5561 | 0.6205 | 0.0005 | 0.4742 | 0.4369 | 0.7774 | 0.1541 | 0.146 | 0.5226 | 0.5262 | 0.7516 | 0.0021 | 0.825 | 0.5364 | 0.5003 | 0.1792 |
| 0.6318 | 10.0 | 5000 | 0.5264 | 0.4936 | 0.6349 | 0.5793 | 0.6865 | 0.0113 | 0.5046 | 0.4713 | 0.7831 | 0.1658 | 0.1576 | 0.5431 | 0.5473 | 0.779 | 0.0284 | 0.8343 | 0.5586 | 0.5251 | 0.2167 |
| 0.5645 | 11.0 | 5500 | 0.5428 | 0.4775 | 0.6367 | 0.5697 | 0.6568 | 0.0055 | 0.4917 | 0.4478 | 0.7701 | 0.1445 | 0.1543 | 0.5246 | 0.5273 | 0.7411 | 0.0242 | 0.8166 | 0.5456 | 0.4978 | 0.2042 |
| 0.5886 | 12.0 | 6000 | 0.5083 | 0.4952 | 0.6519 | 0.5862 | 0.6893 | 0.0087 | 0.5185 | 0.4623 | 0.7875 | 0.1526 | 0.161 | 0.5425 | 0.55 | 0.7633 | 0.0621 | 0.8247 | 0.5662 | 0.5207 | 0.2958 |
| 0.5457 | 13.0 | 6500 | 0.4834 | 0.5798 | 0.7671 | 0.7027 | 0.6983 | 0.256 | 0.562 | 0.5678 | 0.7851 | 0.137 | 0.2139 | 0.6144 | 0.6182 | 0.7484 | 0.2779 | 0.8283 | 0.5973 | 0.6102 | 0.2083 |
| 0.5674 | 14.0 | 7000 | 0.4656 | 0.6557 | 0.8508 | 0.7891 | 0.7219 | 0.4557 | 0.6289 | 0.649 | 0.7896 | 0.2017 | 0.2701 | 0.6958 | 0.6988 | 0.7685 | 0.4958 | 0.8319 | 0.6629 | 0.6961 | 0.3083 |
| 0.5682 | 15.0 | 7500 | 0.4718 | 0.6374 | 0.8625 | 0.7728 | 0.6638 | 0.4603 | 0.6314 | 0.6244 | 0.7882 | 0.1348 | 0.2676 | 0.682 | 0.6845 | 0.7234 | 0.5032 | 0.8268 | 0.6692 | 0.6795 | 0.175 |
| 0.4899 | 16.0 | 8000 | 0.4192 | 0.7045 | 0.9058 | 0.841 | 0.7302 | 0.5906 | 0.7083 | 0.6839 | 0.7927 | 0.1918 | 0.2982 | 0.7452 | 0.748 | 0.777 | 0.6316 | 0.8355 | 0.7423 | 0.734 | 0.2333 |
| 0.3987 | 17.0 | 8500 | 0.3887 | 0.7215 | 0.9245 | 0.8647 | 0.724 | 0.6257 | 0.7247 | 0.7142 | 0.8149 | 0.2032 | 0.3116 | 0.7643 | 0.7692 | 0.7722 | 0.6811 | 0.8545 | 0.7637 | 0.762 | 0.2861 |
| 0.4619 | 18.0 | 9000 | 0.4065 | 0.7078 | 0.9116 | 0.8648 | 0.729 | 0.5958 | 0.7406 | 0.6949 | 0.7987 | 0.1958 | 0.302 | 0.752 | 0.7545 | 0.777 | 0.6453 | 0.8413 | 0.777 | 0.7438 | 0.2458 |
| 0.4152 | 19.0 | 9500 | 0.4048 | 0.7007 | 0.9187 | 0.8475 | 0.7139 | 0.5803 | 0.7123 | 0.6925 | 0.8079 | 0.2193 | 0.2994 | 0.7487 | 0.7523 | 0.7645 | 0.6442 | 0.8482 | 0.7519 | 0.7481 | 0.2875 |
| 0.433 | 20.0 | 10000 | 0.3881 | 0.7122 | 0.8942 | 0.8579 | 0.7317 | 0.5846 | 0.6779 | 0.7154 | 0.8203 | 0.23 | 0.2993 | 0.7515 | 0.7553 | 0.7855 | 0.6221 | 0.8584 | 0.7165 | 0.7616 | 0.2958 |
| 0.4217 | 21.0 | 10500 | 0.3993 | 0.7098 | 0.9189 | 0.854 | 0.7091 | 0.6125 | 0.6944 | 0.7001 | 0.8078 | 0.1268 | 0.3026 | 0.7515 | 0.755 | 0.7621 | 0.6526 | 0.8503 | 0.7351 | 0.7505 | 0.1875 |
| 0.437 | 22.0 | 11000 | 0.3756 | 0.7397 | 0.9455 | 0.8872 | 0.7461 | 0.663 | 0.7388 | 0.7359 | 0.8099 | 0.3 | 0.3132 | 0.7856 | 0.7882 | 0.7839 | 0.7284 | 0.8524 | 0.7758 | 0.7878 | 0.3722 |
| 0.3823 | 23.0 | 11500 | 0.3662 | 0.7413 | 0.9302 | 0.8634 | 0.7438 | 0.6558 | 0.7313 | 0.7434 | 0.8243 | 0.2147 | 0.3125 | 0.7779 | 0.7815 | 0.7823 | 0.6979 | 0.8645 | 0.768 | 0.7838 | 0.2917 |
| 0.4099 | 24.0 | 12000 | 0.3782 | 0.7106 | 0.9135 | 0.837 | 0.7338 | 0.5835 | 0.7303 | 0.7055 | 0.8144 | 0.17 | 0.3035 | 0.7523 | 0.756 | 0.7823 | 0.6368 | 0.8488 | 0.7634 | 0.7529 | 0.2208 |
| 0.4394 | 25.0 | 12500 | 0.3560 | 0.7486 | 0.9406 | 0.9005 | 0.7494 | 0.6785 | 0.7588 | 0.7379 | 0.818 | 0.3099 | 0.3224 | 0.7901 | 0.7924 | 0.796 | 0.7242 | 0.8569 | 0.7922 | 0.7844 | 0.3792 |
| 0.4073 | 26.0 | 13000 | 0.3494 | 0.7445 | 0.9451 | 0.8774 | 0.7541 | 0.6621 | 0.7622 | 0.7349 | 0.8174 | 0.2726 | 0.3184 | 0.7848 | 0.7883 | 0.7944 | 0.7126 | 0.8578 | 0.8 | 0.7812 | 0.3361 |
| 0.3584 | 27.0 | 13500 | 0.3502 | 0.7466 | 0.929 | 0.8916 | 0.7425 | 0.6693 | 0.7625 | 0.7331 | 0.8279 | 0.1946 | 0.3176 | 0.7842 | 0.7892 | 0.7831 | 0.7168 | 0.8678 | 0.7998 | 0.7816 | 0.2583 |
| 0.3832 | 28.0 | 14000 | 0.3387 | 0.7665 | 0.9475 | 0.8894 | 0.7693 | 0.7016 | 0.7768 | 0.7563 | 0.8287 | 0.256 | 0.3242 | 0.8051 | 0.8091 | 0.8069 | 0.7516 | 0.869 | 0.8141 | 0.8013 | 0.3611 |
| 0.4091 | 29.0 | 14500 | 0.3726 | 0.7636 | 0.9538 | 0.8979 | 0.7684 | 0.6981 | 0.7624 | 0.7576 | 0.8243 | 0.3196 | 0.3197 | 0.7993 | 0.8036 | 0.8048 | 0.7442 | 0.8617 | 0.7988 | 0.7974 | 0.4361 |
| 0.3381 | 30.0 | 15000 | 0.3592 | 0.7447 | 0.9546 | 0.9023 | 0.7316 | 0.6702 | 0.7521 | 0.7442 | 0.8323 | 0.3173 | 0.3134 | 0.7879 | 0.7908 | 0.7742 | 0.7263 | 0.872 | 0.7925 | 0.7903 | 0.3958 |
| 0.3368 | 31.0 | 15500 | 0.3480 | 0.7512 | 0.9404 | 0.87 | 0.7578 | 0.6736 | 0.7649 | 0.7399 | 0.8222 | 0.3136 | 0.323 | 0.7971 | 0.8015 | 0.7996 | 0.7379 | 0.8669 | 0.8056 | 0.7943 | 0.425 |
| 0.3684 | 32.0 | 16000 | 0.3740 | 0.7176 | 0.9117 | 0.841 | 0.7508 | 0.5789 | 0.7257 | 0.7114 | 0.8233 | 0.2731 | 0.3027 | 0.7582 | 0.7646 | 0.7988 | 0.6295 | 0.8657 | 0.7583 | 0.7638 | 0.3708 |
| 0.3512 | 33.0 | 16500 | 0.3633 | 0.746 | 0.9453 | 0.8793 | 0.7502 | 0.6716 | 0.7449 | 0.7318 | 0.8162 | 0.2404 | 0.3147 | 0.783 | 0.7873 | 0.7875 | 0.7158 | 0.8587 | 0.7851 | 0.7773 | 0.2931 |
| 0.339 | 34.0 | 17000 | 0.3639 | 0.7315 | 0.9132 | 0.8556 | 0.7576 | 0.6165 | 0.7388 | 0.7248 | 0.8204 | 0.191 | 0.3135 | 0.7729 | 0.7766 | 0.7996 | 0.6674 | 0.863 | 0.7715 | 0.7757 | 0.2681 |
| 0.3473 | 35.0 | 17500 | 0.3505 | 0.7395 | 0.9401 | 0.8612 | 0.7276 | 0.6659 | 0.7272 | 0.7386 | 0.8248 | 0.3197 | 0.3182 | 0.7808 | 0.7871 | 0.7815 | 0.7116 | 0.8684 | 0.7637 | 0.7899 | 0.4139 |
| 0.3245 | 36.0 | 18000 | 0.3572 | 0.7362 | 0.941 | 0.869 | 0.7426 | 0.6396 | 0.7436 | 0.725 | 0.8263 | 0.2533 | 0.3179 | 0.7794 | 0.7844 | 0.798 | 0.6926 | 0.8627 | 0.788 | 0.7784 | 0.3181 |
| 0.3034 | 37.0 | 18500 | 0.3292 | 0.7672 | 0.9507 | 0.9006 | 0.7605 | 0.7016 | 0.7688 | 0.7618 | 0.8394 | 0.3576 | 0.3269 | 0.8055 | 0.8101 | 0.804 | 0.7453 | 0.881 | 0.8074 | 0.8081 | 0.4722 |
| 0.3034 | 38.0 | 19000 | 0.3557 | 0.752 | 0.9405 | 0.8849 | 0.7566 | 0.6644 | 0.7595 | 0.7431 | 0.8351 | 0.1968 | 0.323 | 0.7902 | 0.7963 | 0.804 | 0.7147 | 0.8702 | 0.7997 | 0.7888 | 0.2806 |
| 0.3203 | 39.0 | 19500 | 0.3412 | 0.7583 | 0.936 | 0.881 | 0.7689 | 0.6778 | 0.7761 | 0.7493 | 0.8282 | 0.238 | 0.3233 | 0.8044 | 0.8074 | 0.8137 | 0.7389 | 0.8696 | 0.8151 | 0.802 | 0.3167 |
| 0.3294 | 40.0 | 20000 | 0.3277 | 0.7661 | 0.9502 | 0.9104 | 0.7762 | 0.6952 | 0.7706 | 0.7589 | 0.8268 | 0.2645 | 0.3239 | 0.8081 | 0.8143 | 0.8234 | 0.7505 | 0.869 | 0.8173 | 0.8087 | 0.3597 |
| 0.3371 | 41.0 | 20500 | 0.3199 | 0.7796 | 0.9531 | 0.9052 | 0.7818 | 0.7261 | 0.7832 | 0.7775 | 0.8309 | 0.2026 | 0.3321 | 0.8172 | 0.8244 | 0.8262 | 0.7779 | 0.869 | 0.8192 | 0.8216 | 0.3097 |
| 0.3152 | 42.0 | 21000 | 0.3230 | 0.7752 | 0.9505 | 0.9004 | 0.777 | 0.7122 | 0.7646 | 0.7776 | 0.8363 | 0.2606 | 0.3261 | 0.8152 | 0.8197 | 0.821 | 0.76 | 0.878 | 0.8078 | 0.8211 | 0.3556 |
| 0.2923 | 43.0 | 21500 | 0.3489 | 0.7628 | 0.9461 | 0.9041 | 0.7626 | 0.6946 | 0.7674 | 0.756 | 0.8311 | 0.1877 | 0.3223 | 0.8015 | 0.8074 | 0.8056 | 0.7495 | 0.8672 | 0.8105 | 0.8008 | 0.2806 |
| 0.2911 | 44.0 | 22000 | 0.3462 | 0.7604 | 0.9551 | 0.902 | 0.7409 | 0.704 | 0.7573 | 0.7514 | 0.8362 | 0.2793 | 0.3231 | 0.7995 | 0.8035 | 0.7847 | 0.7547 | 0.8711 | 0.8025 | 0.7941 | 0.3542 |
| 0.3483 | 45.0 | 22500 | 0.3127 | 0.7808 | 0.949 | 0.9075 | 0.7884 | 0.7041 | 0.7737 | 0.781 | 0.8497 | 0.1944 | 0.3301 | 0.8204 | 0.8267 | 0.827 | 0.7716 | 0.8816 | 0.8127 | 0.8258 | 0.3458 |
| 0.2995 | 46.0 | 23000 | 0.3279 | 0.7731 | 0.945 | 0.9019 | 0.7756 | 0.7061 | 0.7637 | 0.773 | 0.8376 | 0.3203 | 0.3282 | 0.8118 | 0.8157 | 0.8181 | 0.7537 | 0.8753 | 0.8122 | 0.8145 | 0.3875 |
| 0.3051 | 47.0 | 23500 | 0.3230 | 0.7755 | 0.9534 | 0.8973 | 0.7728 | 0.7064 | 0.7557 | 0.7788 | 0.8471 | 0.3002 | 0.326 | 0.8161 | 0.8203 | 0.8141 | 0.7642 | 0.8825 | 0.8009 | 0.8228 | 0.3903 |
| 0.3151 | 48.0 | 24000 | 0.3249 | 0.7727 | 0.9556 | 0.8934 | 0.7583 | 0.7109 | 0.758 | 0.7667 | 0.8488 | 0.2545 | 0.3243 | 0.8145 | 0.8188 | 0.8052 | 0.7695 | 0.8816 | 0.8023 | 0.8114 | 0.3583 |
| 0.2858 | 49.0 | 24500 | 0.3337 | 0.7602 | 0.9435 | 0.8897 | 0.7603 | 0.6782 | 0.7742 | 0.7478 | 0.8421 | 0.2511 | 0.3254 | 0.801 | 0.8056 | 0.8077 | 0.7295 | 0.8795 | 0.8134 | 0.7968 | 0.3528 |
| 0.2805 | 50.0 | 25000 | 0.3290 | 0.771 | 0.9535 | 0.8821 | 0.7626 | 0.6968 | 0.7587 | 0.7754 | 0.8534 | 0.25 | 0.3266 | 0.8125 | 0.8168 | 0.8105 | 0.7526 | 0.8873 | 0.8029 | 0.8227 | 0.3403 |
| 0.3061 | 51.0 | 25500 | 0.3256 | 0.7722 | 0.9437 | 0.8933 | 0.7766 | 0.6889 | 0.7676 | 0.7686 | 0.8511 | 0.2907 | 0.3235 | 0.8103 | 0.8128 | 0.819 | 0.7347 | 0.8846 | 0.8068 | 0.8132 | 0.3458 |
| 0.3193 | 52.0 | 26000 | 0.3298 | 0.7714 | 0.9492 | 0.902 | 0.7804 | 0.695 | 0.7758 | 0.7642 | 0.8389 | 0.2703 | 0.3251 | 0.8076 | 0.8107 | 0.8218 | 0.7368 | 0.8735 | 0.8187 | 0.8049 | 0.3375 |
| 0.2874 | 53.0 | 26500 | 0.3233 | 0.7678 | 0.9355 | 0.8823 | 0.7867 | 0.6681 | 0.7805 | 0.762 | 0.8485 | 0.2712 | 0.3252 | 0.808 | 0.811 | 0.8266 | 0.7232 | 0.8831 | 0.8179 | 0.8097 | 0.3347 |
| 0.274 | 54.0 | 27000 | 0.3293 | 0.7717 | 0.9471 | 0.9046 | 0.764 | 0.7013 | 0.7675 | 0.7685 | 0.8498 | 0.2921 | 0.3274 | 0.8088 | 0.8121 | 0.8109 | 0.7442 | 0.8813 | 0.8026 | 0.8127 | 0.3736 |
| 0.3281 | 55.0 | 27500 | 0.3472 | 0.7735 | 0.9504 | 0.9009 | 0.7682 | 0.6983 | 0.7653 | 0.7668 | 0.8541 | 0.3038 | 0.3281 | 0.8107 | 0.8139 | 0.8105 | 0.7453 | 0.8858 | 0.8027 | 0.8109 | 0.3889 |
| 0.2792 | 56.0 | 28000 | 0.3254 | 0.7822 | 0.9473 | 0.9008 | 0.7724 | 0.7218 | 0.788 | 0.7787 | 0.8523 | 0.2316 | 0.3338 | 0.8169 | 0.8217 | 0.8185 | 0.7621 | 0.8843 | 0.8252 | 0.8197 | 0.3208 |
| 0.2993 | 57.0 | 28500 | 0.3367 | 0.7803 | 0.9461 | 0.9146 | 0.7651 | 0.7266 | 0.7959 | 0.7758 | 0.8493 | 0.3025 | 0.3289 | 0.8147 | 0.8221 | 0.8121 | 0.7705 | 0.8837 | 0.8277 | 0.8184 | 0.4361 |
| 0.3099 | 58.0 | 29000 | 0.3266 | 0.7741 | 0.9413 | 0.9049 | 0.7669 | 0.6962 | 0.7867 | 0.7667 | 0.859 | 0.2785 | 0.3255 | 0.8114 | 0.8182 | 0.8157 | 0.7495 | 0.8895 | 0.8228 | 0.8111 | 0.3903 |
| 0.3006 | 59.0 | 29500 | 0.3277 | 0.7811 | 0.9463 | 0.8999 | 0.767 | 0.7194 | 0.7902 | 0.7779 | 0.8569 | 0.2351 | 0.3316 | 0.8173 | 0.8203 | 0.8052 | 0.7674 | 0.8883 | 0.8225 | 0.819 | 0.3264 |
| 0.293 | 60.0 | 30000 | 0.3289 | 0.775 | 0.9453 | 0.9004 | 0.7754 | 0.7093 | 0.7913 | 0.7725 | 0.8403 | 0.3003 | 0.3296 | 0.8108 | 0.8155 | 0.8214 | 0.7463 | 0.8789 | 0.8294 | 0.8104 | 0.4042 |
| 0.293 | 61.0 | 30500 | 0.3381 | 0.7689 | 0.944 | 0.9 | 0.7733 | 0.6846 | 0.7807 | 0.7619 | 0.8488 | 0.2801 | 0.3265 | 0.8021 | 0.8097 | 0.8121 | 0.7337 | 0.8834 | 0.8088 | 0.8084 | 0.3472 |
| 0.3054 | 62.0 | 31000 | 0.3289 | 0.7743 | 0.9535 | 0.8912 | 0.7672 | 0.7019 | 0.7817 | 0.7676 | 0.8538 | 0.3075 | 0.3307 | 0.8142 | 0.8184 | 0.8125 | 0.7568 | 0.8858 | 0.8138 | 0.8166 | 0.3931 |
| 0.2988 | 63.0 | 31500 | 0.3311 | 0.7717 | 0.9403 | 0.8903 | 0.7761 | 0.6895 | 0.7881 | 0.7662 | 0.8496 | 0.3147 | 0.332 | 0.8132 | 0.8155 | 0.8177 | 0.7442 | 0.8846 | 0.8234 | 0.8121 | 0.3958 |
| 0.292 | 64.0 | 32000 | 0.3342 | 0.7766 | 0.9481 | 0.9092 | 0.7738 | 0.7078 | 0.7886 | 0.7735 | 0.8482 | 0.3001 | 0.3296 | 0.8156 | 0.8196 | 0.8181 | 0.7579 | 0.8828 | 0.8229 | 0.8182 | 0.3792 |
| 0.2722 | 65.0 | 32500 | 0.3164 | 0.7786 | 0.9492 | 0.9022 | 0.7827 | 0.6865 | 0.7771 | 0.7759 | 0.8665 | 0.3049 | 0.3279 | 0.8165 | 0.8227 | 0.8306 | 0.7411 | 0.8964 | 0.8132 | 0.8217 | 0.3972 |
| 0.2741 | 66.0 | 33000 | 0.3280 | 0.7785 | 0.9444 | 0.8962 | 0.7717 | 0.7064 | 0.7973 | 0.7746 | 0.8573 | 0.2521 | 0.3303 | 0.8219 | 0.8262 | 0.8202 | 0.7642 | 0.8943 | 0.8314 | 0.8253 | 0.3528 |
| 0.2583 | 67.0 | 33500 | 0.3298 | 0.7684 | 0.9435 | 0.911 | 0.7665 | 0.6877 | 0.7855 | 0.7589 | 0.8511 | 0.2861 | 0.3227 | 0.8092 | 0.8128 | 0.8153 | 0.7368 | 0.8861 | 0.8197 | 0.8066 | 0.3792 |
| 0.2687 | 68.0 | 34000 | 0.3116 | 0.7854 | 0.9422 | 0.9099 | 0.7809 | 0.7136 | 0.7954 | 0.7815 | 0.8617 | 0.3415 | 0.3333 | 0.8256 | 0.8306 | 0.8246 | 0.7705 | 0.8967 | 0.8299 | 0.8304 | 0.4347 |
| 0.2551 | 69.0 | 34500 | 0.3138 | 0.7862 | 0.9438 | 0.9003 | 0.7851 | 0.7196 | 0.7939 | 0.7841 | 0.8538 | 0.252 | 0.3329 | 0.8226 | 0.8268 | 0.8335 | 0.7589 | 0.888 | 0.8324 | 0.8257 | 0.3347 |
| 0.2599 | 70.0 | 35000 | 0.3066 | 0.7808 | 0.9454 | 0.9049 | 0.7823 | 0.7096 | 0.7951 | 0.7726 | 0.8506 | 0.2996 | 0.3309 | 0.8205 | 0.8238 | 0.8274 | 0.7589 | 0.8849 | 0.8362 | 0.8197 | 0.3528 |
| 0.2557 | 71.0 | 35500 | 0.3199 | 0.7818 | 0.9501 | 0.9122 | 0.7759 | 0.7061 | 0.7918 | 0.7726 | 0.8635 | 0.2611 | 0.3298 | 0.8214 | 0.8246 | 0.8202 | 0.76 | 0.8937 | 0.8264 | 0.8182 | 0.3569 |
| 0.2659 | 72.0 | 36000 | 0.3060 | 0.7937 | 0.9493 | 0.9115 | 0.7909 | 0.7209 | 0.7933 | 0.7942 | 0.8693 | 0.3496 | 0.3357 | 0.8295 | 0.8333 | 0.8331 | 0.7663 | 0.9006 | 0.8241 | 0.8341 | 0.4222 |
| 0.2815 | 73.0 | 36500 | 0.3145 | 0.783 | 0.9491 | 0.9155 | 0.7768 | 0.7141 | 0.7839 | 0.7768 | 0.8579 | 0.3012 | 0.329 | 0.8165 | 0.8214 | 0.819 | 0.7568 | 0.8886 | 0.8181 | 0.8173 | 0.3792 |
| 0.2303 | 74.0 | 37000 | 0.3187 | 0.7823 | 0.9535 | 0.9207 | 0.7765 | 0.7248 | 0.7897 | 0.7723 | 0.8457 | 0.3227 | 0.3288 | 0.8211 | 0.8247 | 0.8246 | 0.7684 | 0.881 | 0.8234 | 0.8153 | 0.4236 |
| 0.2871 | 75.0 | 37500 | 0.3217 | 0.7787 | 0.9401 | 0.906 | 0.771 | 0.7157 | 0.7772 | 0.7782 | 0.8493 | 0.3092 | 0.3292 | 0.8229 | 0.8263 | 0.8214 | 0.7684 | 0.8892 | 0.8157 | 0.8304 | 0.4028 |
| 0.2432 | 76.0 | 38000 | 0.3139 | 0.7759 | 0.94 | 0.9042 | 0.7785 | 0.6999 | 0.7796 | 0.7728 | 0.8491 | 0.2778 | 0.3292 | 0.8176 | 0.8212 | 0.8234 | 0.7516 | 0.8886 | 0.8145 | 0.8236 | 0.3361 |
| 0.2781 | 77.0 | 38500 | 0.3370 | 0.7739 | 0.9349 | 0.8953 | 0.7758 | 0.6934 | 0.7756 | 0.7738 | 0.8524 | 0.3185 | 0.3297 | 0.816 | 0.8184 | 0.8214 | 0.7421 | 0.8919 | 0.8123 | 0.8188 | 0.3875 |
| 0.2527 | 78.0 | 39000 | 0.3022 | 0.7879 | 0.9393 | 0.9096 | 0.7889 | 0.7108 | 0.7881 | 0.7877 | 0.8641 | 0.2717 | 0.3349 | 0.8261 | 0.8289 | 0.8355 | 0.7537 | 0.8976 | 0.8242 | 0.8304 | 0.3542 |
| 0.2705 | 79.0 | 39500 | 0.3070 | 0.7803 | 0.9486 | 0.9085 | 0.2776 | 0.7774 | 0.7799 | 0.3326 | 0.8234 | 0.8259 | 0.3333 | 0.8255 | 0.8194 | 0.7755 | 0.823 | 0.7104 | 0.7621 | 0.855 | 0.8925 |
| 0.2275 | 80.0 | 40000 | 0.3107 | 0.7739 | 0.9315 | 0.8937 | 0.2574 | 0.7743 | 0.7726 | 0.3256 | 0.815 | 0.8186 | 0.3278 | 0.8189 | 0.8164 | 0.7871 | 0.8335 | 0.6775 | 0.7284 | 0.8573 | 0.894 |
| 0.2616 | 81.0 | 40500 | 0.3109 | 0.7855 | 0.941 | 0.9013 | 0.3217 | 0.7877 | 0.7872 | 0.333 | 0.8254 | 0.8286 | 0.3903 | 0.8308 | 0.825 | 0.7883 | 0.8294 | 0.7026 | 0.7558 | 0.8655 | 0.9006 |
| 0.2355 | 82.0 | 41000 | 0.3161 | 0.7823 | 0.9425 | 0.8978 | 0.2979 | 0.7793 | 0.78 | 0.3332 | 0.8228 | 0.8274 | 0.3875 | 0.827 | 0.8129 | 0.792 | 0.8351 | 0.6955 | 0.7505 | 0.8594 | 0.8967 |
| 0.252 | 83.0 | 41500 | 0.3297 | 0.7764 | 0.9371 | 0.9031 | 0.2282 | 0.7748 | 0.776 | 0.3291 | 0.8189 | 0.8222 | 0.3236 | 0.8247 | 0.8133 | 0.7746 | 0.8218 | 0.6971 | 0.7495 | 0.8577 | 0.8955 |
| 0.2559 | 84.0 | 42000 | 0.3112 | 0.7837 | 0.931 | 0.8905 | 0.2496 | 0.7793 | 0.788 | 0.3368 | 0.8241 | 0.8267 | 0.3083 | 0.8267 | 0.8234 | 0.7946 | 0.8343 | 0.6957 | 0.7495 | 0.8607 | 0.8964 |
| 0.23 | 85.0 | 42500 | 0.3214 | 0.7768 | 0.9357 | 0.8973 | 0.2582 | 0.7721 | 0.7927 | 0.3265 | 0.8163 | 0.8201 | 0.3583 | 0.8189 | 0.8206 | 0.7768 | 0.8242 | 0.6963 | 0.74 | 0.8574 | 0.8961 |
| 0.211 | 86.0 | 43000 | 0.3098 | 0.7871 | 0.9396 | 0.901 | 0.3 | 0.7813 | 0.7916 | 0.3318 | 0.8257 | 0.8289 | 0.3764 | 0.8275 | 0.8251 | 0.7872 | 0.8327 | 0.7158 | 0.7589 | 0.8583 | 0.8952 |
| 0.2234 | 87.0 | 43500 | 0.2975 | 0.8018 | 0.9482 | 0.9091 | 0.3233 | 0.7983 | 0.805 | 0.3387 | 0.8387 | 0.8425 | 0.4181 | 0.8408 | 0.8362 | 0.8003 | 0.8403 | 0.7399 | 0.7853 | 0.8652 | 0.9018 |
| 0.2361 | 88.0 | 44000 | 0.3144 | 0.7859 | 0.9288 | 0.8954 | 0.2457 | 0.7859 | 0.7829 | 0.33 | 0.8247 | 0.8282 | 0.3167 | 0.8301 | 0.8197 | 0.8026 | 0.8464 | 0.6983 | 0.7442 | 0.8568 | 0.894 |
| 0.2305 | 89.0 | 44500 | 0.3158 | 0.8005 | 0.9495 | 0.9093 | 0.2671 | 0.8012 | 0.79 | 0.3364 | 0.8388 | 0.8416 | 0.3361 | 0.8459 | 0.8235 | 0.7997 | 0.8395 | 0.7393 | 0.7874 | 0.8626 | 0.8979 |
| 0.2235 | 90.0 | 45000 | 0.3080 | 0.7924 | 0.9411 | 0.9083 | 0.2627 | 0.7869 | 0.7957 | 0.3358 | 0.8323 | 0.8366 | 0.3611 | 0.837 | 0.8321 | 0.7875 | 0.8331 | 0.7227 | 0.7737 | 0.867 | 0.903 |
| 0.2371 | 91.0 | 45500 | 0.3000 | 0.7932 | 0.9547 | 0.9102 | 0.307 | 0.7927 | 0.7888 | 0.3305 | 0.834 | 0.838 | 0.3736 | 0.8412 | 0.8265 | 0.7979 | 0.8351 | 0.7177 | 0.7779 | 0.864 | 0.9009 |
| 0.2136 | 92.0 | 46000 | 0.3052 | 0.7917 | 0.9531 | 0.9044 | 0.3165 | 0.7878 | 0.7869 | 0.3339 | 0.8328 | 0.8356 | 0.4056 | 0.8368 | 0.8202 | 0.7917 | 0.8343 | 0.7304 | 0.7789 | 0.8529 | 0.8937 |
| 0.2609 | 93.0 | 46500 | 0.3135 | 0.7798 | 0.933 | 0.8879 | 0.3041 | 0.7746 | 0.7992 | 0.3315 | 0.8231 | 0.8266 | 0.3569 | 0.8259 | 0.8365 | 0.7878 | 0.8343 | 0.6877 | 0.7442 | 0.864 | 0.9012 |
| 0.2258 | 94.0 | 47000 | 0.2984 | 0.7982 | 0.9455 | 0.9074 | 0.308 | 0.7923 | 0.7977 | 0.341 | 0.839 | 0.8418 | 0.3847 | 0.8413 | 0.8305 | 0.802 | 0.8464 | 0.7331 | 0.7832 | 0.8595 | 0.8958 |
| 0.2249 | 95.0 | 47500 | 0.3127 | 0.7872 | 0.9452 | 0.9067 | 0.2976 | 0.7879 | 0.7895 | 0.3353 | 0.8316 | 0.8349 | 0.375 | 0.8363 | 0.8277 | 0.7907 | 0.8375 | 0.7184 | 0.7737 | 0.8527 | 0.8934 |
| 0.2353 | 96.0 | 48000 | 0.3098 | 0.7896 | 0.9428 | 0.9046 | 0.2852 | 0.7846 | 0.7905 | 0.3345 | 0.8299 | 0.8338 | 0.3681 | 0.834 | 0.8252 | 0.7922 | 0.8375 | 0.717 | 0.7663 | 0.8596 | 0.8976 |
| 0.2361 | 97.0 | 48500 | 0.3121 | 0.7897 | 0.9378 | 0.8908 | 0.2774 | 0.7896 | 0.7944 | 0.3377 | 0.8317 | 0.835 | 0.3486 | 0.8377 | 0.8316 | 0.7923 | 0.8379 | 0.7171 | 0.7674 | 0.8596 | 0.8997 |
| 0.2273 | 98.0 | 49000 | 0.2956 | 0.7965 | 0.9503 | 0.903 | 0.3052 | 0.7928 | 0.7924 | 0.338 | 0.8425 | 0.8451 | 0.4028 | 0.8467 | 0.8306 | 0.7937 | 0.8419 | 0.7282 | 0.7905 | 0.8676 | 0.9027 |
| 0.2461 | 99.0 | 49500 | 0.3038 | 0.7906 | 0.9376 | 0.8969 | 0.3125 | 0.7844 | 0.796 | 0.3367 | 0.8322 | 0.8346 | 0.3889 | 0.8342 | 0.8332 | 0.7937 | 0.8399 | 0.718 | 0.7653 | 0.8603 | 0.8985 |
| 0.2195 | 100.0 | 50000 | 0.2938 | 0.7953 | 0.9426 | 0.8928 | 0.3242 | 0.7928 | 0.7988 | 0.3396 | 0.8379 | 0.8402 | 0.4069 | 0.8414 | 0.8343 | 0.8022 | 0.8472 | 0.7161 | 0.7695 | 0.8676 | 0.9039 |
| 0.2093 | 101.0 | 50500 | 0.3043 | 0.7953 | 0.9499 | 0.9023 | 0.3206 | 0.7908 | 0.7915 | 0.3373 | 0.837 | 0.8409 | 0.4042 | 0.8421 | 0.8302 | 0.7931 | 0.8415 | 0.7238 | 0.7779 | 0.869 | 0.9033 |
| 0.2161 | 102.0 | 51000 | 0.3034 | 0.7945 | 0.9405 | 0.9057 | 0.3314 | 0.79 | 0.8063 | 0.3401 | 0.8365 | 0.8393 | 0.4139 | 0.8369 | 0.8437 | 0.7972 | 0.846 | 0.724 | 0.7705 | 0.8624 | 0.9015 |
| 0.219 | 103.0 | 51500 | 0.2984 | 0.7964 | 0.9511 | 0.9086 | 0.3009 | 0.7991 | 0.7869 | 0.338 | 0.8375 | 0.842 | 0.3792 | 0.8462 | 0.8272 | 0.7948 | 0.8435 | 0.7312 | 0.7821 | 0.8631 | 0.9003 |
| 0.2472 | 104.0 | 52000 | 0.3095 | 0.7956 | 0.9511 | 0.9115 | 0.2932 | 0.7917 | 0.8018 | 0.3359 | 0.8359 | 0.8397 | 0.4 | 0.8382 | 0.8348 | 0.7952 | 0.8383 | 0.7274 | 0.78 | 0.8644 | 0.9009 |
| 0.2457 | 105.0 | 52500 | 0.3061 | 0.7929 | 0.9371 | 0.9027 | 0.2809 | 0.784 | 0.8069 | 0.3377 | 0.8327 | 0.8359 | 0.3417 | 0.8316 | 0.8457 | 0.8015 | 0.8448 | 0.7138 | 0.7653 | 0.8634 | 0.8976 |
| 0.2017 | 106.0 | 53000 | 0.3000 | 0.8017 | 0.9475 | 0.9114 | 0.2991 | 0.7983 | 0.8011 | 0.3372 | 0.8439 | 0.8466 | 0.3764 | 0.8475 | 0.8398 | 0.8056 | 0.8496 | 0.7309 | 0.7863 | 0.8687 | 0.9039 |
| 0.2055 | 107.0 | 53500 | 0.2946 | 0.8083 | 0.9481 | 0.9106 | 0.2911 | 0.8083 | 0.7989 | 0.3434 | 0.8477 | 0.8513 | 0.3667 | 0.8562 | 0.8344 | 0.8141 | 0.8536 | 0.7396 | 0.7947 | 0.8713 | 0.9054 |
| 0.221 | 108.0 | 54000 | 0.2895 | 0.8036 | 0.9508 | 0.9127 | 0.3249 | 0.8017 | 0.7996 | 0.3407 | 0.8445 | 0.8478 | 0.4083 | 0.849 | 0.8383 | 0.8034 | 0.8464 | 0.7377 | 0.7937 | 0.8696 | 0.9033 |
| 0.2018 | 109.0 | 54500 | 0.2988 | 0.8 | 0.9398 | 0.9075 | 0.265 | 0.7982 | 0.8017 | 0.3413 | 0.8414 | 0.8438 | 0.3292 | 0.8465 | 0.8359 | 0.8103 | 0.8516 | 0.7241 | 0.78 | 0.8656 | 0.8997 |
| 0.2116 | 110.0 | 55000 | 0.2982 | 0.8012 | 0.9461 | 0.9134 | 0.3103 | 0.7977 | 0.8061 | 0.3412 | 0.843 | 0.8453 | 0.3722 | 0.8487 | 0.8362 | 0.8007 | 0.8419 | 0.7373 | 0.7937 | 0.8655 | 0.9003 |
| 0.2151 | 111.0 | 55500 | 0.2987 | 0.8005 | 0.9404 | 0.8995 | 0.2806 | 0.799 | 0.8037 | 0.3428 | 0.8413 | 0.8449 | 0.3514 | 0.8491 | 0.837 | 0.8043 | 0.85 | 0.7294 | 0.7842 | 0.8676 | 0.9006 |
| 0.2168 | 112.0 | 56000 | 0.2926 | 0.8076 | 0.9475 | 0.9101 | 0.2939 | 0.804 | 0.8077 | 0.3404 | 0.8471 | 0.8506 | 0.3681 | 0.8533 | 0.8372 | 0.808 | 0.8476 | 0.7416 | 0.7968 | 0.8732 | 0.9072 |
| 0.2245 | 113.0 | 56500 | 0.2975 | 0.7991 | 0.9446 | 0.9068 | 0.2872 | 0.7977 | 0.797 | 0.335 | 0.8404 | 0.8435 | 0.3458 | 0.8469 | 0.8342 | 0.8001 | 0.8423 | 0.7274 | 0.7832 | 0.8698 | 0.9051 |
| 0.235 | 114.0 | 57000 | 0.2973 | 0.8075 | 0.9449 | 0.908 | 0.3041 | 0.8058 | 0.8076 | 0.3413 | 0.8483 | 0.8517 | 0.375 | 0.8554 | 0.8403 | 0.8073 | 0.8512 | 0.7428 | 0.7968 | 0.8724 | 0.9069 |
| 0.2008 | 115.0 | 57500 | 0.3147 | 0.8024 | 0.9466 | 0.9154 | 0.298 | 0.7987 | 0.8062 | 0.3396 | 0.8438 | 0.8464 | 0.3611 | 0.847 | 0.8412 | 0.7963 | 0.8423 | 0.7436 | 0.7958 | 0.8671 | 0.9012 |
| 0.2225 | 116.0 | 58000 | 0.2958 | 0.802 | 0.9409 | 0.9039 | 0.2781 | 0.8002 | 0.8078 | 0.3399 | 0.8459 | 0.8488 | 0.3444 | 0.8512 | 0.8418 | 0.8009 | 0.8464 | 0.7359 | 0.7958 | 0.8691 | 0.9042 |
| 0.1969 | 117.0 | 58500 | 0.2989 | 0.8062 | 0.9446 | 0.9094 | 0.2932 | 0.8073 | 0.8038 | 0.34 | 0.846 | 0.8489 | 0.3639 | 0.8523 | 0.841 | 0.8083 | 0.8528 | 0.74 | 0.7895 | 0.8702 | 0.9045 |
| 0.2242 | 118.0 | 59000 | 0.2910 | 0.8075 | 0.9496 | 0.9127 | 0.2838 | 0.805 | 0.8004 | 0.3422 | 0.8481 | 0.8508 | 0.3597 | 0.8513 | 0.8397 | 0.8093 | 0.854 | 0.7413 | 0.7937 | 0.8719 | 0.9048 |
| 0.2193 | 119.0 | 59500 | 0.2938 | 0.8062 | 0.9437 | 0.9114 | 0.2695 | 0.8073 | 0.8067 | 0.3416 | 0.848 | 0.8511 | 0.3403 | 0.8555 | 0.843 | 0.8109 | 0.8548 | 0.7422 | 0.7968 | 0.8656 | 0.9015 |
| 0.1929 | 120.0 | 60000 | 0.2947 | 0.8036 | 0.9466 | 0.9071 | 0.2826 | 0.8013 | 0.8005 | 0.3405 | 0.845 | 0.8482 | 0.3597 | 0.8502 | 0.8404 | 0.8025 | 0.848 | 0.7364 | 0.7905 | 0.8718 | 0.906 |
| 0.234 | 121.0 | 60500 | 0.2966 | 0.8051 | 0.9469 | 0.9099 | 0.2836 | 0.8044 | 0.8052 | 0.3415 | 0.8458 | 0.8485 | 0.3472 | 0.8516 | 0.841 | 0.8082 | 0.8504 | 0.7361 | 0.7895 | 0.8709 | 0.9057 |
| 0.2487 | 122.0 | 61000 | 0.2978 | 0.8028 | 0.9442 | 0.9099 | 0.3001 | 0.8003 | 0.8034 | 0.3393 | 0.8427 | 0.8455 | 0.3681 | 0.848 | 0.8394 | 0.8041 | 0.8464 | 0.7369 | 0.7884 | 0.8672 | 0.9018 |
| 0.2144 | 123.0 | 61500 | 0.3070 | 0.8028 | 0.9434 | 0.9065 | 0.2893 | 0.8061 | 0.7976 | 0.3389 | 0.845 | 0.8474 | 0.3486 | 0.8535 | 0.8387 | 0.7999 | 0.8452 | 0.7387 | 0.7926 | 0.8698 | 0.9045 |
| 0.2 | 124.0 | 62000 | 0.3050 | 0.8024 | 0.9412 | 0.9047 | 0.2815 | 0.8016 | 0.8082 | 0.341 | 0.8424 | 0.8449 | 0.3264 | 0.8467 | 0.8447 | 0.8076 | 0.8504 | 0.7302 | 0.7811 | 0.8695 | 0.9033 |
| 0.216 | 125.0 | 62500 | 0.2965 | 0.8061 | 0.9436 | 0.9075 | 0.3028 | 0.8071 | 0.8086 | 0.342 | 0.8471 | 0.8496 | 0.3597 | 0.8543 | 0.8407 | 0.8073 | 0.8504 | 0.7383 | 0.7916 | 0.8728 | 0.9069 |
| 0.2556 | 126.0 | 63000 | 0.3021 | 0.7961 | 0.9403 | 0.9039 | 0.2741 | 0.7967 | 0.7992 | 0.3378 | 0.8386 | 0.8413 | 0.3389 | 0.8435 | 0.839 | 0.7994 | 0.8448 | 0.7243 | 0.7768 | 0.8645 | 0.9024 |
| 0.212 | 127.0 | 63500 | 0.3013 | 0.8022 | 0.9435 | 0.9073 | 0.2989 | 0.7997 | 0.809 | 0.3371 | 0.8421 | 0.8456 | 0.3639 | 0.8472 | 0.8447 | 0.8029 | 0.8448 | 0.7313 | 0.7853 | 0.8724 | 0.9069 |
| 0.2067 | 128.0 | 64000 | 0.3011 | 0.8016 | 0.9445 | 0.9093 | 0.2706 | 0.8003 | 0.8064 | 0.3377 | 0.843 | 0.8455 | 0.325 | 0.8487 | 0.8421 | 0.8046 | 0.8468 | 0.7339 | 0.7863 | 0.8663 | 0.9033 |
| 0.2224 | 129.0 | 64500 | 0.3004 | 0.8021 | 0.9448 | 0.91 | 0.2758 | 0.8017 | 0.8014 | 0.3393 | 0.8433 | 0.846 | 0.3347 | 0.8498 | 0.8375 | 0.8056 | 0.8488 | 0.7294 | 0.7832 | 0.8714 | 0.906 |
| 0.2127 | 130.0 | 65000 | 0.3002 | 0.8015 | 0.9417 | 0.9065 | 0.3047 | 0.8002 | 0.8029 | 0.3385 | 0.8422 | 0.845 | 0.3694 | 0.8462 | 0.8412 | 0.8059 | 0.8496 | 0.7281 | 0.7789 | 0.8705 | 0.9063 |
| 0.2123 | 131.0 | 65500 | 0.2958 | 0.8062 | 0.9469 | 0.9108 | 0.3028 | 0.8056 | 0.8099 | 0.3402 | 0.8479 | 0.8505 | 0.3778 | 0.8542 | 0.8448 | 0.8095 | 0.8536 | 0.7341 | 0.7884 | 0.8749 | 0.9093 |
| 0.2066 | 132.0 | 66000 | 0.2969 | 0.8044 | 0.9469 | 0.9015 | 0.3025 | 0.8018 | 0.806 | 0.3398 | 0.8456 | 0.8485 | 0.3778 | 0.8499 | 0.8416 | 0.8096 | 0.8528 | 0.7301 | 0.7842 | 0.8733 | 0.9084 |
| 0.2252 | 133.0 | 66500 | 0.2954 | 0.8052 | 0.9468 | 0.9083 | 0.3014 | 0.8039 | 0.8042 | 0.3398 | 0.8467 | 0.8492 | 0.3694 | 0.8511 | 0.8418 | 0.8066 | 0.8508 | 0.7358 | 0.7884 | 0.8733 | 0.9084 |
| 0.2008 | 134.0 | 67000 | 0.2946 | 0.806 | 0.9471 | 0.9108 | 0.3066 | 0.8033 | 0.8064 | 0.3403 | 0.8473 | 0.8499 | 0.3778 | 0.8507 | 0.8432 | 0.8082 | 0.8512 | 0.7386 | 0.7916 | 0.871 | 0.9069 |
| 0.2195 | 135.0 | 67500 | 0.2956 | 0.8044 | 0.9439 | 0.9084 | 0.3235 | 0.8027 | 0.8062 | 0.3408 | 0.8446 | 0.8471 | 0.3889 | 0.8487 | 0.8405 | 0.81 | 0.854 | 0.7303 | 0.7789 | 0.8731 | 0.9084 |
| 0.2044 | 136.0 | 68000 | 0.2923 | 0.8052 | 0.9439 | 0.9099 | 0.307 | 0.8025 | 0.8113 | 0.3411 | 0.8454 | 0.848 | 0.3681 | 0.8488 | 0.8457 | 0.8091 | 0.8532 | 0.7311 | 0.7811 | 0.8753 | 0.9096 |
| 0.2717 | 137.0 | 68500 | 0.2937 | 0.8071 | 0.9445 | 0.9129 | 0.2932 | 0.8037 | 0.8121 | 0.3404 | 0.8468 | 0.8494 | 0.3611 | 0.8506 | 0.8457 | 0.8116 | 0.8536 | 0.7365 | 0.7874 | 0.8732 | 0.9072 |
| 0.2083 | 138.0 | 69000 | 0.2941 | 0.8062 | 0.9444 | 0.9128 | 0.3005 | 0.8043 | 0.8084 | 0.3403 | 0.8467 | 0.8494 | 0.3639 | 0.8519 | 0.8433 | 0.8074 | 0.8512 | 0.7362 | 0.7884 | 0.8749 | 0.9084 |
| 0.2154 | 139.0 | 69500 | 0.2936 | 0.8055 | 0.9439 | 0.9101 | 0.307 | 0.8027 | 0.8108 | 0.3399 | 0.8445 | 0.847 | 0.3681 | 0.8476 | 0.846 | 0.8119 | 0.854 | 0.7297 | 0.7789 | 0.875 | 0.9081 |
| 0.1902 | 140.0 | 70000 | 0.2949 | 0.8063 | 0.9444 | 0.91 | 0.3034 | 0.8041 | 0.8116 | 0.3398 | 0.8452 | 0.8478 | 0.3639 | 0.8485 | 0.8465 | 0.8125 | 0.8548 | 0.7301 | 0.7789 | 0.8762 | 0.9096 |
| 0.2147 | 141.0 | 70500 | 0.2934 | 0.8086 | 0.9477 | 0.9116 | 0.3069 | 0.8086 | 0.8095 | 0.3406 | 0.8487 | 0.8513 | 0.3681 | 0.8546 | 0.8441 | 0.8111 | 0.8532 | 0.7396 | 0.7916 | 0.875 | 0.909 |
| 0.2194 | 142.0 | 71000 | 0.2932 | 0.8077 | 0.9475 | 0.9156 | 0.3069 | 0.8069 | 0.8075 | 0.3401 | 0.8481 | 0.8508 | 0.3681 | 0.8534 | 0.8425 | 0.8106 | 0.8536 | 0.739 | 0.7905 | 0.8734 | 0.9081 |
| 0.1975 | 143.0 | 71500 | 0.2929 | 0.8078 | 0.9477 | 0.9157 | 0.3077 | 0.8077 | 0.8063 | 0.3403 | 0.8481 | 0.8507 | 0.3681 | 0.8533 | 0.842 | 0.8099 | 0.8532 | 0.7384 | 0.7895 | 0.875 | 0.9093 |
| 0.2057 | 144.0 | 72000 | 0.2928 | 0.8073 | 0.9478 | 0.9158 | 0.3091 | 0.8059 | 0.8091 | 0.3407 | 0.8477 | 0.8502 | 0.3681 | 0.8521 | 0.8437 | 0.8101 | 0.8532 | 0.7369 | 0.7884 | 0.875 | 0.909 |
| 0.2006 | 145.0 | 72500 | 0.2929 | 0.8079 | 0.9478 | 0.9158 | 0.3048 | 0.8069 | 0.809 | 0.34 | 0.8479 | 0.8504 | 0.3639 | 0.8528 | 0.8437 | 0.8102 | 0.8524 | 0.7385 | 0.7895 | 0.875 | 0.9093 |
| 0.2465 | 146.0 | 73000 | 0.2919 | 0.8069 | 0.9477 | 0.9158 | 0.3091 | 0.8062 | 0.8072 | 0.341 | 0.8471 | 0.8497 | 0.3681 | 0.8524 | 0.8423 | 0.8104 | 0.8532 | 0.7364 | 0.7874 | 0.8738 | 0.9084 |
| 0.2178 | 147.0 | 73500 | 0.2920 | 0.8076 | 0.9477 | 0.9157 | 0.3091 | 0.8073 | 0.8067 | 0.3411 | 0.8476 | 0.8502 | 0.3681 | 0.8533 | 0.8415 | 0.8111 | 0.8528 | 0.7378 | 0.7895 | 0.8738 | 0.9084 |
| 0.1954 | 148.0 | 74000 | 0.2920 | 0.8076 | 0.9477 | 0.9158 | 0.3091 | 0.8074 | 0.8067 | 0.3412 | 0.8478 | 0.8504 | 0.3681 | 0.8535 | 0.8415 | 0.8111 | 0.8532 | 0.7378 | 0.7895 | 0.8738 | 0.9084 |
| 0.2306 | 149.0 | 74500 | 0.2920 | 0.8076 | 0.9477 | 0.9158 | 0.3091 | 0.8073 | 0.8067 | 0.3412 | 0.8476 | 0.8502 | 0.3681 | 0.8532 | 0.8415 | 0.8113 | 0.8536 | 0.7376 | 0.7884 | 0.8738 | 0.9084 |
| 0.186 | 150.0 | 75000 | 0.2920 | 0.8077 | 0.9477 | 0.9158 | 0.3091 | 0.8073 | 0.8068 | 0.3412 | 0.8477 | 0.8503 | 0.3681 | 0.8532 | 0.8416 | 0.8113 | 0.8536 | 0.7376 | 0.7884 | 0.8741 | 0.9087 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.5.0+cu121
- Datasets 2.19.2
- Tokenizers 0.20.1
|
PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-3B | PJMixers-Dev | 2024-10-28T03:20:00Z | 8 | 0 | null | [
"safetensors",
"llama",
"en",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-3B-Instruct",
"license:llama3.2",
"model-index",
"region:us"
] | null | 2024-10-14T16:13:12Z | ---
language:
- en
base_model:
- unsloth/Llama-3.2-3B-Instruct
license: llama3.2
model-index:
- name: LLaMa-3.2-Instruct-JankMix-v0.2-SFT-3B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 62.92
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 23.34
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 11.33
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 3.02
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 4.87
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 23.5
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-3B
name: Open LLM Leaderboard
---
A much more heavily trained version, this time done with full fine-tuning instead of DoRA, using a similar ~50/50 mix of completion and instruct data.
Note: This likely has refusals like [PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.1-SFT-3B](https://huggingface.co/PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.1-SFT-3B) since no focus was put on removing refusals. I'm working on a KTO DoRA to solve this, and possibly improve roleplay performance.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/PJMixers-Dev__LLaMa-3.2-Instruct-JankMix-v0.2-SFT-3B-details)
| Metric |Value|
|-------------------|----:|
|Avg. |21.50|
|IFEval (0-Shot) |62.92|
|BBH (3-Shot) |23.34|
|MATH Lvl 5 (4-Shot)|11.33|
|GPQA (0-shot) | 3.02|
|MuSR (0-shot) | 4.87|
|MMLU-PRO (5-shot) |23.50|
|
PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.1-SFT-3B | PJMixers-Dev | 2024-10-28T03:18:41Z | 9 | 0 | null | [
"safetensors",
"llama",
"en",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"license:llama3.2",
"model-index",
"region:us"
] | null | 2024-10-12T01:12:46Z | ---
language:
- en
license: llama3.2
base_model:
- meta-llama/Llama-3.2-3B-Instruct
model-index:
- name: LLaMa-3.2-Instruct-JankMix-v0.1-SFT-3B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 69.31
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.1-SFT-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 23.81
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.1-SFT-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 10.42
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.1-SFT-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 3.24
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.1-SFT-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 4.05
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.1-SFT-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 23.64
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.1-SFT-3B
name: Open LLM Leaderboard
---
A quick test tune on top of `meta-llama/Llama-3.2-3B-Instruct`, using a ~50/50 mix of instruct and completion data.
Note: Training is nowhere near complete, so I'm unsure how strong an effect it had. The model still refuses requests, just like `meta-llama/Llama-3.2-3B-Instruct`.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/PJMixers-Dev__LLaMa-3.2-Instruct-JankMix-v0.1-SFT-3B-details)
| Metric |Value|
|-------------------|----:|
|Avg. |22.41|
|IFEval (0-Shot) |69.31|
|BBH (3-Shot) |23.81|
|MATH Lvl 5 (4-Shot)|10.42|
|GPQA (0-shot) | 3.24|
|MuSR (0-shot) | 4.05|
|MMLU-PRO (5-shot) |23.64|
|
ramonactruta/ramonactruta-llama-3.2.Instruct | ramonactruta | 2024-10-28T03:17:38Z | 98 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-3",
"orpo",
"conversational",
"en",
"dataset:mlabonne/orpo-dpo-mix-40k",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"license:llama3.2",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-27T22:09:27Z | ---
license: llama3.2
tags:
- llama-3
- orpo
- transformers
datasets:
- mlabonne/orpo-dpo-mix-40k
language:
- en
base_model:
- meta-llama/Llama-3.2-1B-Instruct
library_name: transformers
pipeline_tag: text-generation
model-index:
- name: week2-llama3-1B
results:
- task:
type: text-generation
dataset:
name: mlabonne/orpo-dpo-mix-40k
type: mlabonne/orpo-dpo-mix-40k
metrics:
- name: acc-norm (0-Shot)
type: acc-norm (0-Shot)
value: 0.6077
---
<a id="top"></a>
# Llama-3.2-1B-Instruct-ORPO
[Evaluation](#evaluation) [Environmental Impact](#environmental-impact)
## Model Details
This model was obtained by finetuning the open source [Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct)
model on the [mlabonne/orpo-dpo-mix-40k](https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k) dataset, leveraging
[Odds Ratio Preference Optimization (ORPO)](https://github.com/xfactlab/orpo) for preference alignment.
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
This model is optimized for general-purpose language tasks.
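A minimal usage sketch is shown below; it assumes a recent `transformers` version whose `text-generation` pipeline accepts chat-style messages, and the prompt and generation settings are illustrative.
```py
# Hedged usage sketch; the prompt and generation settings are illustrative.
from transformers import pipeline

pipe = pipeline("text-generation", model="ramonactruta/ramonactruta-llama-3.2.Instruct")

messages = [{"role": "user", "content": "Explain preference optimization in one paragraph."}]
out = pipe(messages, max_new_tokens=128)
print(out[0]["generated_text"])
```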
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
We used the [EleutherAI LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to evaluate the finetuned model.
The table below presents a summary of the evaluation performed.
For a more granular evaluation on `MMLU`, please see Section [MMLU](#mmlu).
| Tasks |Version|Filter|n-shot| Metric | |Value | |Stderr|
|---------|------:|------|-----:|--------|---|-----:|---|-----:|
|hellaswag| 1|none | 0|acc |↑ |0.4507|± |0.0050|
| | |none | 0|acc_norm|↑ |0.6077|± |0.0049|
|arc_easy| 1|none | 0|acc |↑ |0.6856|± |0.0095|
| | |none | 0|acc_norm|↑ |0.6368|± |0.0099|
|mmlu | 2|none | |acc |↑ |0.4597|± |0.0041|
| - humanities | 2|none | |acc |↑ |0.4434|± |0.0071|
| - other | 2|none | |acc |↑ |0.5163|± |0.0088|
| - social sciences| 2|none | |acc |↑ |0.5057|± |0.0088|
| - stem | 2|none | |acc |↑ |0.3834|± |0.0085|
[Top](#top)
### MMLU
| Tasks |Version|Filter|n-shot| Metric | |Value | |Stderr|
|---------|------:|------|-----:|--------|---|-----:|---|-----:|
|mmlu | 2|none | |acc |↑ |0.4597|± |0.0041|
| - humanities | 2|none | |acc |↑ |0.4434|± |0.0071|
| - formal_logic | 1|none | 0|acc |↑ |0.3254|± |0.0419|
| - high_school_european_history | 1|none | 0|acc |↑ |0.6182|± |0.0379|
| - high_school_us_history | 1|none | 0|acc |↑ |0.5784|± |0.0347|
| - high_school_world_history | 1|none | 0|acc |↑ |0.6540|± |0.0310|
| - international_law | 1|none | 0|acc |↑ |0.6033|± |0.0447|
| - jurisprudence | 1|none | 0|acc |↑ |0.5370|± |0.0482|
| - logical_fallacies | 1|none | 0|acc |↑ |0.4479|± |0.0391|
| - moral_disputes | 1|none | 0|acc |↑ |0.4711|± |0.0269|
| - moral_scenarios | 1|none | 0|acc |↑ |0.3408|± |0.0159|
| - philosophy | 1|none | 0|acc |↑ |0.5177|± |0.0284|
| - prehistory | 1|none | 0|acc |↑ |0.5278|± |0.0278|
| - professional_law | 1|none | 0|acc |↑ |0.3683|± |0.0123|
| - world_religions | 1|none | 0|acc |↑ |0.5906|± |0.0377|
| - other | 2|none | |acc |↑ |0.5163|± |0.0088|
| - business_ethics | 1|none | 0|acc |↑ |0.4300|± |0.0498|
| - clinical_knowledge | 1|none | 0|acc |↑ |0.4642|± |0.0307|
| - college_medicine | 1|none | 0|acc |↑ |0.3815|± |0.0370|
| - global_facts | 1|none | 0|acc |↑ |0.3200|± |0.0469|
| - human_aging | 1|none | 0|acc |↑ |0.5157|± |0.0335|
| - management | 1|none | 0|acc |↑ |0.5243|± |0.0494|
| - marketing | 1|none | 0|acc |↑ |0.6709|± |0.0308|
| - medical_genetics | 1|none | 0|acc |↑ |0.4800|± |0.0502|
| - miscellaneous | 1|none | 0|acc |↑ |0.6015|± |0.0175|
| - nutrition | 1|none | 0|acc |↑ |0.5686|± |0.0284|
| - professional_accounting | 1|none | 0|acc |↑ |0.3511|± |0.0285|
| - professional_medicine | 1|none | 0|acc |↑ |0.5625|± |0.0301|
| - virology | 1|none | 0|acc |↑ |0.4157|± |0.0384|
| - social sciences | 2|none | |acc |↑ |0.5057|± |0.0088|
| - econometrics | 1|none | 0|acc |↑ |0.2456|± |0.0405|
| - high_school_geography | 1|none | 0|acc |↑ |0.5606|± |0.0354|
| - high_school_government_and_politics| 1|none | 0|acc |↑ |0.5389|± |0.0360|
| - high_school_macroeconomics | 1|none | 0|acc |↑ |0.4128|± |0.0250|
| - high_school_microeconomics | 1|none | 0|acc |↑ |0.4454|± |0.0323|
| - high_school_psychology | 1|none | 0|acc |↑ |0.6183|± |0.0208|
| - human_sexuality | 1|none | 0|acc |↑ |0.5420|± |0.0437|
| - professional_psychology | 1|none | 0|acc |↑ |0.4167|± |0.0199|
| - public_relations | 1|none | 0|acc |↑ |0.5000|± |0.0479|
| - security_studies | 1|none | 0|acc |↑ |0.5265|± |0.0320|
| - sociology | 1|none | 0|acc |↑ |0.6468|± |0.0338|
| - us_foreign_policy | 1|none | 0|acc |↑ |**0.6900**|± |0.0465|
| - stem | 2|none | |acc |↑ |0.3834|± |0.0085|
| - abstract_algebra | 1|none | 0|acc |↑ |0.2500|± |0.0435|
| - anatomy | 1|none | 0|acc |↑ |0.4889|± |0.0432|
| - astronomy | 1|none | 0|acc |↑ |0.5329|± |0.0406|
| - college_biology | 1|none | 0|acc |↑ |0.4931|± |0.0418|
| - college_chemistry | 1|none | 0|acc |↑ |0.3800|± |0.0488|
| - college_computer_science | 1|none | 0|acc |↑ |0.3300|± |0.0473|
| - college_mathematics | 1|none | 0|acc |↑ |0.2800|± |0.0451|
| - college_physics | 1|none | 0|acc |↑ |0.2451|± |0.0428|
| - computer_security | 1|none | 0|acc |↑ |0.4800|± |0.0502|
| - conceptual_physics | 1|none | 0|acc |↑ |0.4383|± |0.0324|
| - electrical_engineering | 1|none | 0|acc |↑ |0.5310|± |0.0416|
| - elementary_mathematics | 1|none | 0|acc |↑ |0.2884|± |0.0233|
| - high_school_biology | 1|none | 0|acc |↑ |0.4935|± |0.0284|
| - high_school_chemistry | 1|none | 0|acc |↑ |0.3645|± |0.0339|
| - high_school_computer_science | 1|none | 0|acc |↑ |0.4500|± |0.0500|
| - high_school_mathematics | 1|none | 0|acc |↑ |0.2815|± |0.0274|
| - high_school_physics | 1|none | 0|acc |↑ |0.3113|± |0.0378|
| - high_school_statistics | 1|none | 0|acc |↑ |0.3657|± |0.0328|
| - machine_learning | 1|none | 0|acc |↑ |0.2768|± |0.0425|
[Top](#top)
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** MacBook Air M1
- **Hours used:** 1
- **Cloud Provider:** GCP (A100)
- **Compute Region:** US-EAST1
- **Carbon Emitted:** 0.09 kg CO<sub>2</sub>eq, of which 100 percent was directly offset by the cloud provider.
[Top](#top) |
BEE-spoke-data/tFINE-680m-e32-d16-gqa-flan | BEE-spoke-data | 2024-10-28T03:15:09Z | 24 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"flan",
"gqa",
"instruct",
"en",
"dataset:pszemraj/flan-subsets-deduped",
"base_model:BEE-spoke-data/tFINE-680m-e32-d16-gqa-1024",
"base_model:finetune:BEE-spoke-data/tFINE-680m-e32-d16-gqa-1024",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-10-16T22:03:26Z | ---
library_name: transformers
language:
- en
license: apache-2.0
base_model: BEE-spoke-data/tFINE-680m-e32-d16-gqa-1024
tags:
- flan
- t5
- gqa
- instruct
datasets:
- pszemraj/flan-subsets-deduped
---
# tFINE-680m-e32-d16-gqa-flan
FLAN-tuned variant of a tFINE (t5) model with GQA.
- 32 encoder layers
- 16 decoder layers
- 1024 hidden size
## testing
install [transformers fork with GQA updates for t5](https://github.com/pszemraj/transformers/tree/t5-gqa) (⚠️WIP🚧):
```sh
pip install -U git+https://github.com/pszemraj/transformers.git@t5-gqa
```
then
```py
# pip install -U git+https://github.com/pszemraj/transformers.git@t5-gqa
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("BEE-spoke-data/tFINE-680m-e32-d16-gqa-flan")
model = AutoModelForSeq2SeqLM.from_pretrained(
"BEE-spoke-data/tFINE-680m-e32-d16-gqa-flan"
)
prompt = "What is the capital of France?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
generated_ids = model.generate(**inputs, max_new_tokens=64, no_repeat_ngram_size=3)
print(
tokenizer.batch_decode(
generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True
)[0]
)
```
## Quick eval
Quick eval for: `BEE-spoke-data/tFINE-680m-e32-d16-gqa-flan`
hf (pretrained=BEE-spoke-data/tFINE-680m-e32-d16-gqa-flan,trust_remote_code=True,dtype=bfloat16,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 8
| Tasks |Version|Filter|n-shot| Metric | |Value | |Stderr|
|-------------|------:|------|-----:|--------|---|-----:|---|------|
|boolq | 2|none | 0|acc |↑ |0.7040|± |0.0080|
|openbookqa | 1|none | 0|acc |↑ |0.1580|± |0.0163|
| | |none | 0|acc_norm|↑ |0.2420|± |0.0192|
|piqa | 1|none | 0|acc |↑ |0.6132|± |0.0114|
| | |none | 0|acc_norm|↑ |0.6159|± |0.0113|
|social_iqa | 0|none | 0|acc |↑ |0.4319|± |0.0112|
|tinyArc | 0|none | 25|acc_norm|↑ |0.2898|± | N/A|
|tinyHellaswag| 0|none | 10|acc_norm|↑ |0.3295|± | N/A|
|tinyMMLU | 0|none | 0|acc_norm|↑ |0.2980|± | N/A|
|winogrande | 1|none | 0|acc |↑ |0.5020|± |0.0141|
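The header above matches the output format of EleutherAI's lm-evaluation-harness, so the numbers can likely be reproduced with a call along these lines (a sketch only: task names are copied from the table, the `tiny*` tasks need the tinyBenchmarks extras, and the exact harness version used is unknown):
```python
# Hypothetical reproduction of the quick eval above with lm-evaluation-harness (>= 0.4).
from lm_eval import simple_evaluate

results = simple_evaluate(
    model="hf",
    model_args=(
        "pretrained=BEE-spoke-data/tFINE-680m-e32-d16-gqa-flan,"
        "dtype=bfloat16,trust_remote_code=True"
    ),
    tasks=["boolq", "openbookqa", "piqa", "social_iqa",
           "tinyArc", "tinyHellaswag", "tinyMMLU", "winogrande"],
    batch_size=8,
)
print(results["results"])  # per-task acc / stderr, as tabulated above
```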
## Training and evaluation data
used config 'all'
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 17868
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- total_eval_batch_size: 4
- optimizer: paged_ademamix_32bit (no additional optimizer arguments)
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1.0
|
foundationmodels/MIMIC-medical-report | foundationmodels | 2024-10-28T03:14:30Z | 131 | 0 | transformers | [
"transformers",
"safetensors",
"phi-msft",
"text-generation",
"custom_code",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-04T18:12:36Z | # MIMIC-Medical-Report Model
## Overview
This project presents a fine-tuned model based on Microsoft's PHI-2, trained on the MIMIC dataset using Python and PyTorch. Leveraging Hugging Face's Transformers library, this model significantly enhances AI's capacity to extract critical medical insights, improving diagnostic accuracy in healthcare.
## Features
- **Model Architecture**: Fine-tuned PHI-2 model using Transformer-based architecture
- **Dataset**: MIMIC medical dataset, preprocessed to ensure high data integrity
- **Purpose**: Assists in generating detailed medical reports, extracting key insights to support clinical decisions
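The card does not include a usage snippet; below is a minimal, untested sketch for loading the model with `transformers`. The repository uses custom `phi-msft` code (hence `trust_remote_code=True`), and the prompt shown is only a placeholder, not the report format used in training.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "foundationmodels/MIMIC-medical-report"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Placeholder prompt; the report template used during fine-tuning is not documented here.
prompt = "Findings: mild cardiomegaly, no focal consolidation.\nImpression:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```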
|
RichardErkhov/Elfrino_-_LiquidCrystal_V3-20B-gguf | RichardErkhov | 2024-10-28T03:14:11Z | 15 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-10-27T18:58:16Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
LiquidCrystal_V3-20B - GGUF
- Model creator: https://huggingface.co/Elfrino/
- Original model: https://huggingface.co/Elfrino/LiquidCrystal_V3-20B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [LiquidCrystal_V3-20B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Elfrino_-_LiquidCrystal_V3-20B-gguf/blob/main/LiquidCrystal_V3-20B.Q2_K.gguf) | Q2_K | 6.91GB |
| [LiquidCrystal_V3-20B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Elfrino_-_LiquidCrystal_V3-20B-gguf/blob/main/LiquidCrystal_V3-20B.Q3_K_S.gguf) | Q3_K_S | 8.06GB |
| [LiquidCrystal_V3-20B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Elfrino_-_LiquidCrystal_V3-20B-gguf/blob/main/LiquidCrystal_V3-20B.Q3_K.gguf) | Q3_K | 9.04GB |
| [LiquidCrystal_V3-20B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Elfrino_-_LiquidCrystal_V3-20B-gguf/blob/main/LiquidCrystal_V3-20B.Q3_K_M.gguf) | Q3_K_M | 9.04GB |
| [LiquidCrystal_V3-20B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Elfrino_-_LiquidCrystal_V3-20B-gguf/blob/main/LiquidCrystal_V3-20B.Q3_K_L.gguf) | Q3_K_L | 9.9GB |
| [LiquidCrystal_V3-20B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Elfrino_-_LiquidCrystal_V3-20B-gguf/blob/main/LiquidCrystal_V3-20B.IQ4_XS.gguf) | IQ4_XS | 10.01GB |
| [LiquidCrystal_V3-20B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Elfrino_-_LiquidCrystal_V3-20B-gguf/blob/main/LiquidCrystal_V3-20B.Q4_0.gguf) | Q4_0 | 10.52GB |
| [LiquidCrystal_V3-20B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Elfrino_-_LiquidCrystal_V3-20B-gguf/blob/main/LiquidCrystal_V3-20B.IQ4_NL.gguf) | IQ4_NL | 10.57GB |
| [LiquidCrystal_V3-20B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Elfrino_-_LiquidCrystal_V3-20B-gguf/blob/main/LiquidCrystal_V3-20B.Q4_K_S.gguf) | Q4_K_S | 10.59GB |
| [LiquidCrystal_V3-20B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Elfrino_-_LiquidCrystal_V3-20B-gguf/blob/main/LiquidCrystal_V3-20B.Q4_K.gguf) | Q4_K | 11.22GB |
| [LiquidCrystal_V3-20B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Elfrino_-_LiquidCrystal_V3-20B-gguf/blob/main/LiquidCrystal_V3-20B.Q4_K_M.gguf) | Q4_K_M | 11.22GB |
| [LiquidCrystal_V3-20B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Elfrino_-_LiquidCrystal_V3-20B-gguf/blob/main/LiquidCrystal_V3-20B.Q4_1.gguf) | Q4_1 | 11.67GB |
| [LiquidCrystal_V3-20B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Elfrino_-_LiquidCrystal_V3-20B-gguf/blob/main/LiquidCrystal_V3-20B.Q5_0.gguf) | Q5_0 | 12.83GB |
| [LiquidCrystal_V3-20B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Elfrino_-_LiquidCrystal_V3-20B-gguf/blob/main/LiquidCrystal_V3-20B.Q5_K_S.gguf) | Q5_K_S | 12.83GB |
| [LiquidCrystal_V3-20B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Elfrino_-_LiquidCrystal_V3-20B-gguf/blob/main/LiquidCrystal_V3-20B.Q5_K.gguf) | Q5_K | 13.18GB |
| [LiquidCrystal_V3-20B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Elfrino_-_LiquidCrystal_V3-20B-gguf/blob/main/LiquidCrystal_V3-20B.Q5_K_M.gguf) | Q5_K_M | 13.18GB |
| [LiquidCrystal_V3-20B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Elfrino_-_LiquidCrystal_V3-20B-gguf/blob/main/LiquidCrystal_V3-20B.Q5_1.gguf) | Q5_1 | 13.98GB |
| [LiquidCrystal_V3-20B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Elfrino_-_LiquidCrystal_V3-20B-gguf/blob/main/LiquidCrystal_V3-20B.Q6_K.gguf) | Q6_K | 15.28GB |
| [LiquidCrystal_V3-20B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Elfrino_-_LiquidCrystal_V3-20B-gguf/blob/main/LiquidCrystal_V3-20B.Q8_0.gguf) | Q8_0 | 19.79GB |
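As a quick start, here is a sketch for running one of the files above with llama-cpp-python (recent versions provide `Llama.from_pretrained`, which needs `huggingface-hub`). The filename is taken from the table; the prompt wording is arbitrary, so check the original model cards for the intended template.
```python
from llama_cpp import Llama  # pip install llama-cpp-python huggingface-hub

llm = Llama.from_pretrained(
    repo_id="RichardErkhov/Elfrino_-_LiquidCrystal_V3-20B-gguf",
    filename="LiquidCrystal_V3-20B.Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm("Write a short scene set on a rainy street at night.\n", max_tokens=128)
print(out["choices"][0]["text"])
```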
Original model description:
---
base_model:
- Undi95/PsyMedRP-v1-20B
- Elfrino/XwinXtended-20B
library_name: transformers
tags:
- mergekit
- merge
---

A merge between an ol' fav: [Undi95/PsyMedRP-v1-20B](https://huggingface.co/Undi95/PsyMedRP-v1-20B)
and:
An extended XwinLM: [Elfrino/XwinXtended-20B](https://huggingface.co/Elfrino/XwinXtended-20B)
Early results look promising but more testing is needed....
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Undi95/PsyMedRP-v1-20B](https://huggingface.co/Undi95/PsyMedRP-v1-20B)
* [Elfrino/XwinXtended-20B](https://huggingface.co/Elfrino/XwinXtended-20B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [0, 62]
- model: Elfrino/XwinXtended-20B
layer_range: [0, 62]
merge_method: slerp
base_model: Undi95/PsyMedRP-v1-20B
parameters:
t:
- filter: self_attn
value: [0.8, 0.8, 0.9, 0.7, .8]
- filter: mlp
value: [.8, 0.8, 0.9, 0.8, .7]
- value: 0.2
dtype: bfloat16
```
|
nhyha/N3N_Llama-3.1-8B-Instruct_1028_0216 | nhyha | 2024-10-28T03:11:42Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-28T03:01:56Z | ---
base_model: meta-llama/Llama-3.1-8B-Instruct
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** nhyha
- **License:** apache-2.0
- **Finetuned from model :** meta-llama/Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Yutaka77/bert-base-japanese-v3-wrime-sentiment-yutaka | Yutaka77 | 2024-10-28T03:07:26Z | 107 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-28T03:07:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
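As a stopgap while the card is incomplete, here is a generic sketch inferred only from the repository tags (`bert`, `text-classification`) and name; the label set, preprocessing, and intended use are not documented, and the underlying Japanese BERT tokenizer typically requires `fugashi` and `unidic-lite` to be installed.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Yutaka77/bert-base-japanese-v3-wrime-sentiment-yutaka",
)
# Example Japanese input; label names/meanings must be checked against the training setup.
print(classifier("今日はとても楽しい一日だった。"))
```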
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
JhonMR/DistriBert_TPF_v9 | JhonMR | 2024-10-28T03:05:46Z | 107 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-28T03:00:51Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: DistriBert_TPF_v9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistriBert_TPF_v9
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Accuracy@en: 0.8349
- F1@en: 0.8353
- Precision@en: 0.8406
- Recall@en: 0.8358
- Loss@en: 0.5675
- Loss: 0.5675
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30
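For orientation, these settings correspond roughly to the following `transformers.TrainingArguments`; this is a reconstruction, not the actual training script, and the output directory is a placeholder.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="DistriBert_TPF_v9",      # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=30,
)
```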
### Training results
| Training Loss | Epoch | Step | Accuracy@en | F1@en | Precision@en | Recall@en | Loss@en | Validation Loss |
|:-------------:|:-----:|:----:|:-----------:|:------:|:------------:|:---------:|:-------:|:---------------:|
| 3.3814 | 1.0 | 276 | 0.1672 | 0.1000 | 0.1207 | 0.1699 | 2.9259 | 2.9259 |
| 2.705 | 2.0 | 552 | 0.2122 | 0.1458 | 0.1464 | 0.2121 | 2.4904 | 2.4904 |
| 2.4079 | 3.0 | 828 | 0.2664 | 0.2101 | 0.2919 | 0.2674 | 2.2948 | 2.2948 |
| 2.1179 | 4.0 | 1104 | 0.3952 | 0.3393 | 0.3872 | 0.3941 | 1.9555 | 1.9555 |
| 1.7552 | 5.0 | 1380 | 0.4966 | 0.4568 | 0.4748 | 0.4960 | 1.5776 | 1.5776 |
| 1.4478 | 6.0 | 1656 | 0.5635 | 0.5259 | 0.5638 | 0.5640 | 1.3563 | 1.3563 |
| 1.2016 | 7.0 | 1932 | 0.6286 | 0.5985 | 0.6485 | 0.6334 | 1.1235 | 1.1235 |
| 1.0053 | 8.0 | 2208 | 0.6971 | 0.6751 | 0.7205 | 0.6954 | 0.9861 | 0.9861 |
| 0.8457 | 9.0 | 2484 | 0.7537 | 0.7466 | 0.7582 | 0.7534 | 0.8329 | 0.8329 |
| 0.7163 | 10.0 | 2760 | 0.7831 | 0.7780 | 0.7957 | 0.7832 | 0.7397 | 0.7397 |
| 0.6167 | 11.0 | 3036 | 0.7992 | 0.7962 | 0.8098 | 0.7995 | 0.6955 | 0.6955 |
| 0.5421 | 12.0 | 3312 | 0.7995 | 0.7923 | 0.8148 | 0.7996 | 0.6953 | 0.6953 |
| 0.4798 | 13.0 | 3588 | 0.8212 | 0.8209 | 0.8311 | 0.8218 | 0.6242 | 0.6242 |
| 0.4415 | 14.0 | 3864 | 0.8278 | 0.8273 | 0.8370 | 0.8284 | 0.6013 | 0.6013 |
| 0.3899 | 15.0 | 4140 | 0.8325 | 0.8331 | 0.8439 | 0.8329 | 0.5969 | 0.5969 |
| 0.3671 | 16.0 | 4416 | 0.8349 | 0.8353 | 0.8406 | 0.8358 | 0.5675 | 0.5675 |
| 0.3278 | 17.0 | 4692 | 0.8392 | 0.8399 | 0.8459 | 0.8401 | 0.5784 | 0.5784 |
| 0.3041 | 18.0 | 4968 | 0.8392 | 0.8390 | 0.8451 | 0.8399 | 0.5765 | 0.5765 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.0.2
- Tokenizers 0.19.1
|
asr-africa/w2v-bert-2.0-CV_Fleurs-lg-100hrs-v4 | asr-africa | 2024-10-28T03:03:12Z | 14 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/w2v-bert-2.0",
"base_model:finetune:facebook/w2v-bert-2.0",
"license:mit",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-10-26T18:41:14Z | ---
library_name: transformers
license: mit
base_model: facebook/w2v-bert-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: w2v-bert-2.0-CV_Fleurs-lg-100hrs-v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w2v-bert-2.0-CV_Fleurs-lg-100hrs-v4
This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2835
- Wer: 0.2749
- Cer: 0.0546
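No usage example is provided in the card; below is a minimal sketch based on the repository tags (`wav2vec2-bert`, `automatic-speech-recognition`), assuming 16 kHz mono audio and using a placeholder file name.
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="asr-africa/w2v-bert-2.0-CV_Fleurs-lg-100hrs-v4",
)
result = asr("sample_luganda_16khz.wav")  # placeholder path to a 16 kHz audio file
print(result["text"])
```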
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:------:|:---------------:|:------:|:------:|
| 1.3102 | 1.0 | 7057 | 0.3038 | 0.3739 | 0.0787 |
| 0.1944 | 2.0 | 14114 | 0.2340 | 0.3188 | 0.0634 |
| 0.1558 | 3.0 | 21171 | 0.2157 | 0.3021 | 0.0598 |
| 0.1373 | 4.0 | 28228 | 0.2037 | 0.2926 | 0.0574 |
| 0.1237 | 5.0 | 35285 | 0.2079 | 0.3025 | 0.0591 |
| 0.1152 | 6.0 | 42342 | 0.2044 | 0.2899 | 0.0583 |
| 0.1091 | 7.0 | 49399 | 0.2025 | 0.3012 | 0.0567 |
| 0.1042 | 8.0 | 56456 | 0.2033 | 0.2728 | 0.0552 |
| 0.0985 | 9.0 | 63513 | 0.2035 | 0.2760 | 0.0543 |
| 0.0949 | 10.0 | 70570 | 0.2054 | 0.2730 | 0.0541 |
| 0.0885 | 11.0 | 77627 | 0.2071 | 0.2758 | 0.0557 |
| 0.08 | 12.0 | 84684 | 0.2039 | 0.2743 | 0.0540 |
| 0.0719 | 13.0 | 91741 | 0.2045 | 0.2648 | 0.0531 |
| 0.063 | 14.0 | 98798 | 0.2133 | 0.2682 | 0.0544 |
| 0.0555 | 15.0 | 105855 | 0.2107 | 0.2658 | 0.0524 |
| 0.0471 | 16.0 | 112912 | 0.2408 | 0.2687 | 0.0529 |
| 0.0399 | 17.0 | 119969 | 0.2419 | 0.2721 | 0.0535 |
| 0.0341 | 18.0 | 127026 | 0.2587 | 0.2833 | 0.0551 |
| 0.0293 | 19.0 | 134083 | 0.2518 | 0.2735 | 0.0534 |
| 0.0255 | 20.0 | 141140 | 0.2795 | 0.2669 | 0.0530 |
| 0.022 | 21.0 | 148197 | 0.2748 | 0.2820 | 0.0543 |
| 0.0196 | 22.0 | 155254 | 0.2837 | 0.2785 | 0.0529 |
| 0.017 | 23.0 | 162311 | 0.2835 | 0.2749 | 0.0546 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.1.0+cu118
- Datasets 3.0.2
- Tokenizers 0.20.1
|
unicamp-dl/monoptt5-3b | unicamp-dl | 2024-10-28T02:57:30Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"pt",
"dataset:unicamp-dl/mmarco",
"arxiv:2008.09144",
"arxiv:2406.10806",
"base_model:unicamp-dl/ptt5-v2-3b",
"base_model:finetune:unicamp-dl/ptt5-v2-3b",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-06-09T05:20:21Z | ---
datasets:
- unicamp-dl/mmarco
language:
- pt
pipeline_tag: text2text-generation
base_model: unicamp-dl/ptt5-v2-3b
license: apache-2.0
---
## Introduction
MonoPTT5 models are T5 rerankers for the Portuguese language. Starting from [ptt5-v2 checkpoints](https://huggingface.co/collections/unicamp-dl/ptt5-v2-666538a650188ba00aa8d2d0), they were trained for 100k steps on a mixture of Portuguese and English data from the mMARCO dataset.
For further information on the training and evaluation of these models, please refer to our paper, [ptt5-v2: A Closer Look at Continued Pretraining of T5 Models for the Portuguese Language](https://arxiv.org/abs/2406.10806).
## Usage
The easiest way to use our models is through the `rerankers` package. After installing the package using `pip install rerankers[transformers]`, the following code can be used as a minimal working example:
```python
from rerankers import Reranker
import torch
query = "O futebol é uma paixão nacional"
docs = [
"O futebol é superestimado e não deveria receber tanta atenção.",
"O futebol é uma parte essencial da cultura brasileira e une as pessoas.",
]
ranker = Reranker(
"unicamp-dl/monoptt5-3b",
inputs_template="Pergunta: {query} Documento: {text} Relevante:",
dtype=torch.float32 # or bfloat16 if supported by your GPU
)
results = ranker.rank(query, docs)
print("Classification results:")
for result in results:
print(result)
# Loading T5Ranker model unicamp-dl/monoptt5-3b
# No device set
# Using device cuda
# Using dtype torch.float32
# Loading model unicamp-dl/monoptt5-3b, this might take a while...
# Using device cuda.
# Using dtype torch.float32.
# T5 true token set to ▁Sim
# T5 false token set to ▁Não
# Returning normalised scores...
# Inputs template set to Pergunta: {query} Documento: {text} Relevante:
# Classification results:
# document=Document(text='O futebol é uma parte essencial da cultura brasileira e une as pessoas.', doc_id=1, metadata={}) score=0.9612176418304443 rank=1
# document=Document(text='O futebol é superestimado e não deveria receber tanta atenção.', doc_id=0, metadata={}) score=0.09502816945314407 rank=2
```
For additional configurations and more advanced usage, consult the `rerankers` [GitHub repository](https://github.com/AnswerDotAI/rerankers).
## Citation
If you use our models, please cite:
```
@misc{piau2024ptt5v2,
title={ptt5-v2: A Closer Look at Continued Pretraining of T5 Models for the Portuguese Language},
author={Marcos Piau and Roberto Lotufo and Rodrigo Nogueira},
year={2024},
eprint={2406.10806},
archivePrefix={arXiv},
      primaryClass={cs.CL}
}
``` |
unicamp-dl/monoptt5-base | unicamp-dl | 2024-10-28T02:57:11Z | 176 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"pt",
"dataset:unicamp-dl/mmarco",
"arxiv:2008.09144",
"arxiv:2406.10806",
"base_model:unicamp-dl/ptt5-v2-base",
"base_model:finetune:unicamp-dl/ptt5-v2-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-06-09T05:02:31Z | ---
datasets:
- unicamp-dl/mmarco
language:
- pt
pipeline_tag: text2text-generation
base_model: unicamp-dl/ptt5-v2-base
license: apache-2.0
---
## Introduction
MonoPTT5 models are T5 rerankers for the Portuguese language. Starting from [ptt5-v2 checkpoints](https://huggingface.co/collections/unicamp-dl/ptt5-v2-666538a650188ba00aa8d2d0), they were trained for 100k steps on a mixture of Portuguese and English data from the mMARCO dataset.
For further information on the training and evaluation of these models, please refer to our paper, [ptt5-v2: A Closer Look at Continued Pretraining of T5 Models for the Portuguese Language](https://arxiv.org/abs/2406.10806).
## Usage
The easiest way to use our models is through the `rerankers` package. After installing the package using `pip install rerankers[transformers]`, the following code can be used as a minimal working example:
```python
from rerankers import Reranker
import torch
query = "O futebol é uma paixão nacional"
docs = [
"O futebol é superestimado e não deveria receber tanta atenção.",
"O futebol é uma parte essencial da cultura brasileira e une as pessoas.",
]
ranker = Reranker(
"unicamp-dl/monoptt5-base",
inputs_template="Pergunta: {query} Documento: {text} Relevante:",
dtype=torch.float32 # or bfloat16 if supported by your GPU
)
results = ranker.rank(query, docs)
print("Classification results:")
for result in results:
print(result)
# Loading T5Ranker model unicamp-dl/monoptt5-base
# No device set
# Using device cuda
# Using dtype torch.float32
# Loading model unicamp-dl/monoptt5-base, this might take a while...
# Using device cuda.
# Using dtype torch.float32.
# T5 true token set to ▁Sim
# T5 false token set to ▁Não
# Returning normalised scores...
# Inputs template set to Pergunta: {query} Documento: {text} Relevante:
# Classification results:
# document=Document(text='O futebol é uma parte essencial da cultura brasileira e une as pessoas.', doc_id=1, metadata={}) score=0.8186910152435303 rank=1
# document=Document(text='O futebol é superestimado e não deveria receber tanta atenção.', doc_id=0, metadata={}) score=0.008028557524085045 rank=2
```
For additional configurations and more advanced usage, consult the `rerankers` [GitHub repository](https://github.com/AnswerDotAI/rerankers).
## Citation
If you use our models, please cite:
```
@misc{piau2024ptt5v2,
title={ptt5-v2: A Closer Look at Continued Pretraining of T5 Models for the Portuguese Language},
author={Marcos Piau and Roberto Lotufo and Rodrigo Nogueira},
year={2024},
eprint={2406.10806},
archivePrefix={arXiv},
      primaryClass={cs.CL}
}
``` |
xiny47661/task-13-Qwen-Qwen1.5-1.8B | xiny47661 | 2024-10-28T02:57:10Z | 59 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-1.8B",
"base_model:adapter:Qwen/Qwen1.5-1.8B",
"region:us"
] | null | 2024-10-08T15:34:05Z | ---
base_model: Qwen/Qwen1.5-1.8B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
unicamp-dl/monoptt5-small | unicamp-dl | 2024-10-28T02:57:05Z | 251 | 1 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"pt",
"dataset:unicamp-dl/mmarco",
"arxiv:2008.09144",
"arxiv:2406.10806",
"base_model:unicamp-dl/ptt5-v2-small",
"base_model:finetune:unicamp-dl/ptt5-v2-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-06-09T05:02:03Z | ---
datasets:
- unicamp-dl/mmarco
language:
- pt
pipeline_tag: text2text-generation
base_model: unicamp-dl/ptt5-v2-small
license: apache-2.0
---
## Introduction
MonoPTT5 models are T5 rerankers for the Portuguese language. Starting from [ptt5-v2 checkpoints](https://huggingface.co/collections/unicamp-dl/ptt5-v2-666538a650188ba00aa8d2d0), they were trained for 100k steps on a mixture of Portuguese and English data from the mMARCO dataset.
For further information on the training and evaluation of these models, please refer to our paper, [ptt5-v2: A Closer Look at Continued Pretraining of T5 Models for the Portuguese Language](https://arxiv.org/abs/2406.10806).
## Usage
The easiest way to use our models is through the `rerankers` package. After installing the package using `pip install rerankers[transformers]`, the following code can be used as a minimal working example:
```python
from rerankers import Reranker
import torch
query = "O futebol é uma paixão nacional"
docs = [
"O futebol é superestimado e não deveria receber tanta atenção.",
"O futebol é uma parte essencial da cultura brasileira e une as pessoas.",
]
ranker = Reranker(
"unicamp-dl/monoptt5-small",
inputs_template="Pergunta: {query} Documento: {text} Relevante:",
dtype=torch.float32 # or bfloat16 if supported by your GPU
)
results = ranker.rank(query, docs)
print("Classification results:")
for result in results:
print(result)
# Loading T5Ranker model unicamp-dl/monoptt5-small
# No device set
# Using device cuda
# Using dtype torch.float32
# Loading model unicamp-dl/monoptt5-small, this might take a while...
# Using device cuda.
# Using dtype torch.float32.
# T5 true token set to ▁Sim
# T5 false token set to ▁Não
# Returning normalised scores...
# Inputs template set to Pergunta: {query} Documento: {text} Relevante:
# Classification results:
# document=Document(text='O futebol é uma parte essencial da cultura brasileira e une as pessoas.', doc_id=1, metadata={}) score=0.9192759394645691 rank=1
# document=Document(text='O futebol é superestimado e não deveria receber tanta atenção.', doc_id=0, metadata={}) score=0.026855656877160072 rank=2
```
For additional configurations and more advanced usage, consult the `rerankers` [GitHub repository](https://github.com/AnswerDotAI/rerankers).
## Citation
If you use our models, please cite:
```
@misc{piau2024ptt5v2,
title={ptt5-v2: A Closer Look at Continued Pretraining of T5 Models for the Portuguese Language},
author={Marcos Piau and Roberto Lotufo and Rodrigo Nogueira},
year={2024},
eprint={2406.10806},
archivePrefix={arXiv},
      primaryClass={cs.CL}
}
``` |
unicamp-dl/ptt5-v2-large | unicamp-dl | 2024-10-28T02:56:31Z | 375 | 1 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"pt",
"dataset:allenai/c4",
"dataset:legacy-datasets/mc4",
"arxiv:2008.09144",
"arxiv:2406.10806",
"base_model:google-t5/t5-large",
"base_model:finetune:google-t5/t5-large",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-06-09T04:51:47Z | ---
datasets:
- allenai/c4
- legacy-datasets/mc4
language:
- pt
pipeline_tag: text2text-generation
base_model: google-t5/t5-large
license: apache-2.0
---
# ptt5-v2-large
## Introduction
[ptt5-v2 models](https://huggingface.co/collections/unicamp-dl/ptt5-v2-666538a650188ba00aa8d2d0) are pretrained T5 models tailored for the Portuguese language, continuing from Google's original checkpoints with sizes from t5-small to t5-3B.
These checkpoints were used to train MonoT5 rerankers for the Portuguese language, which can be found in their [HuggingFace collection](https://huggingface.co/collections/unicamp-dl/monoptt5-66653981877df3ea727f720d).
For further information about the pretraining process, please refer to our paper, [ptt5-v2: A Closer Look at Continued Pretraining of T5 Models for the Portuguese Language](https://arxiv.org/abs/2406.10806).
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("unicamp-dl/ptt5-v2-large")
model = T5ForConditionalGeneration.from_pretrained("unicamp-dl/ptt5-v2-large")
```
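Note that this is a pretrained checkpoint (not instruction-tuned), intended mainly as a starting point for fine-tuning; raw generation reflects the span-denoising objective. Continuing from the snippet above, a small probe (output not guaranteed):
```python
# Span-denoising style probe: the model should propose text for the <extra_id_0> sentinel.
inputs = tokenizer(
    "O PTT5 é um modelo de linguagem <extra_id_0> para o português.",
    return_tensors="pt",
)
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```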
## Citation
If you use our models, please cite:
```
@misc{piau2024ptt5v2,
title={ptt5-v2: A Closer Look at Continued Pretraining of T5 Models for the Portuguese Language},
author={Marcos Piau and Roberto Lotufo and Rodrigo Nogueira},
year={2024},
eprint={2406.10806},
archivePrefix={arXiv},
      primaryClass={cs.CL}
}
``` |
unicamp-dl/ptt5-v2-base | unicamp-dl | 2024-10-28T02:56:21Z | 541 | 2 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"pt",
"dataset:allenai/c4",
"dataset:legacy-datasets/mc4",
"arxiv:2008.09144",
"arxiv:2406.10806",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-06-09T04:50:50Z | ---
datasets:
- allenai/c4
- legacy-datasets/mc4
language:
- pt
pipeline_tag: text2text-generation
base_model: google-t5/t5-base
license: apache-2.0
---
# ptt5-v2-base
## Introduction
[ptt5-v2 models](https://huggingface.co/collections/unicamp-dl/ptt5-v2-666538a650188ba00aa8d2d0) are pretrained T5 models tailored for the Portuguese language, continuing from Google's original checkpoints with sizes from t5-small to t5-3B.
These checkpoints were used to train MonoT5 rerankers for the Portuguese language, which can be found in their [HuggingFace collection](https://huggingface.co/collections/unicamp-dl/monoptt5-66653981877df3ea727f720d).
For further information about the pretraining process, please refer to our paper, [ptt5-v2: A Closer Look at Continued Pretraining of T5 Models for the Portuguese Language](https://arxiv.org/abs/2406.10806).
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("unicamp-dl/ptt5-v2-base")
model = T5ForConditionalGeneration.from_pretrained("unicamp-dl/ptt5-v2-base")
```
## Citation
If you use our models, please cite:
```
@article{piau2024ptt5v2,
title={ptt5-v2: A Closer Look at Continued Pretraining of T5 Models for the Portuguese Language},
author={Marcos Piau and Roberto Lotufo and Rodrigo Nogueira},
year={2024},
eprint={2406.10806},
archivePrefix={arXiv},
      primaryClass={cs.CL}
}
``` |
xiny47661/task-13-google-gemma-2b | xiny47661 | 2024-10-28T02:50:52Z | 131 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"region:us"
] | null | 2024-10-08T15:36:42Z | ---
base_model: google/gemma-2b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
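Since the card is otherwise empty, here is a generic sketch for loading this repository as a PEFT adapter on top of the declared base model `google/gemma-2b`; access to the gated Gemma weights and the adapter's intended task are assumptions.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "google/gemma-2b"
adapter_id = "xiny47661/task-13-google-gemma-2b"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attaches the adapter weights

inputs = tokenizer("Hello, world!", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```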
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
relaxml/Llama-3.1-405B-Instruct-QTIP-4Bit | relaxml | 2024-10-28T02:49:30Z | 8 | 0 | null | [
"safetensors",
"llama",
"region:us"
] | null | 2024-10-24T02:18:00Z | 
|
relaxml/Llama-3.1-405B-Instruct-QTIP-2Bit-TP8 | relaxml | 2024-10-28T02:49:19Z | 5 | 1 | null | [
"safetensors",
"llama",
"region:us"
] | null | 2024-10-19T20:09:38Z | This model is compatible with tensor parallelism. The RHT runs per-GPU instead of across GPUs. q, k, v, up, and gate are split along the output channel, and o and down are split along the input channel.
This model has slightly worse quality than the non "TP8" model. |
relaxml/Llama-3.1-405B-Instruct-QTIP-4Bit-TP8 | relaxml | 2024-10-28T02:49:01Z | 6 | 0 | null | [
"safetensors",
"llama",
"region:us"
] | null | 2024-10-19T20:48:52Z | This model is compatible with tensor parallelism. The RHT runs per-GPU instead of across GPUs. q, k, v, up, and gate are split along the output channel, and o and down are split along the input channel.
This model has slightly worse quality than the non "TP8" model. |
Yutaka77/bert-base-japanese-v3-wrime-sentiment_yutaka | Yutaka77 | 2024-10-28T02:48:46Z | 107 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-28T02:48:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/MiniPLM-Mamba-130M-GGUF | mradermacher | 2024-10-28T02:48:10Z | 42 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:monology/pile-uncopyrighted",
"dataset:MiniLLM/pile-diff_samp-qwen_1.8B-qwen_104M-r0.5",
"base_model:MiniLLM/MiniPLM-Mamba-130M",
"base_model:quantized:MiniLLM/MiniPLM-Mamba-130M",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-10-21T08:10:13Z | ---
base_model: MiniLLM/MiniPLM-Mamba-130M
datasets:
- monology/pile-uncopyrighted
- MiniLLM/pile-diff_samp-qwen_1.8B-qwen_104M-r0.5
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/MiniLLM/MiniPLM-Mamba-130M
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
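For a quick start, here is a sketch using llama-cpp-python with one of the files listed below (filename taken from the table; since this is a Mamba architecture, a recent llama.cpp/llama-cpp-python build with Mamba support is assumed):
```python
from llama_cpp import Llama  # pip install llama-cpp-python huggingface-hub

llm = Llama.from_pretrained(
    repo_id="mradermacher/MiniPLM-Mamba-130M-GGUF",
    filename="MiniPLM-Mamba-130M.Q8_0.gguf",
)
print(llm("Once upon a time", max_tokens=32)["choices"][0]["text"])
```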
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MiniPLM-Mamba-130M-GGUF/resolve/main/MiniPLM-Mamba-130M.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/MiniPLM-Mamba-130M-GGUF/resolve/main/MiniPLM-Mamba-130M.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/MiniPLM-Mamba-130M-GGUF/resolve/main/MiniPLM-Mamba-130M.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MiniPLM-Mamba-130M-GGUF/resolve/main/MiniPLM-Mamba-130M.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/MiniPLM-Mamba-130M-GGUF/resolve/main/MiniPLM-Mamba-130M.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/MiniPLM-Mamba-130M-GGUF/resolve/main/MiniPLM-Mamba-130M.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MiniPLM-Mamba-130M-GGUF/resolve/main/MiniPLM-Mamba-130M.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MiniPLM-Mamba-130M-GGUF/resolve/main/MiniPLM-Mamba-130M.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/MiniPLM-Mamba-130M-GGUF/resolve/main/MiniPLM-Mamba-130M.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/MiniPLM-Mamba-130M-GGUF/resolve/main/MiniPLM-Mamba-130M.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MiniPLM-Mamba-130M-GGUF/resolve/main/MiniPLM-Mamba-130M.Q8_0.gguf) | Q8_0 | 0.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MiniPLM-Mamba-130M-GGUF/resolve/main/MiniPLM-Mamba-130M.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
tmchan0003/bert-base-uncased-financial-news-sentiment | tmchan0003 | 2024-10-28T02:43:58Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-03T03:50:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
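
In the absence of an official snippet, the following is a minimal sketch based only on the repository name and the text-classification tag, assuming the standard transformers interface (the label names are whatever the checkpoint defines).

```python
# Minimal sketch, assuming the standard transformers text-classification setup.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="tmchan0003/bert-base-uncased-financial-news-sentiment",
)
print(classifier("Shares rallied after the company raised its full-year guidance."))
```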
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
relaxml/Llama-2-7b-QTIP-4Bit | relaxml | 2024-10-28T02:43:44Z | 7 | 2 | null | [
"safetensors",
"llama",
"region:us"
] | null | 2024-09-30T02:35:54Z | 
|
relaxml/Llama-2-7b-QTIP-3Bit | relaxml | 2024-10-28T02:43:32Z | 7 | 1 | null | [
"safetensors",
"llama",
"region:us"
] | null | 2024-09-30T01:54:07Z | 
|
relaxml/Llama-2-13b-QTIP-3Bit | relaxml | 2024-10-28T02:43:20Z | 17 | 0 | null | [
"safetensors",
"llama",
"region:us"
] | null | 2024-09-30T01:58:12Z | 
|
relaxml/Llama-2-70b-QTIP-4Bit | relaxml | 2024-10-28T02:42:37Z | 5 | 0 | null | [
"safetensors",
"llama",
"region:us"
] | null | 2024-09-30T02:51:50Z | 
|
mrinaldi/gattina-ns-classifier-fpt | mrinaldi | 2024-10-28T02:42:28Z | 123 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-28T02:36:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
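
As a placeholder until an official snippet is added, the sketch below assumes the standard transformers sequence-classification setup implied by the repository's tags; the label mapping comes from the checkpoint's own config.

```python
# Minimal sketch, assuming the standard transformers sequence-classification setup.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "mrinaldi/gattina-ns-classifier-fpt"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Example input text", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```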
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
relaxml/Llama-2-70b-QTIP-2Bit | relaxml | 2024-10-28T02:42:14Z | 5 | 0 | null | [
"safetensors",
"llama",
"region:us"
] | null | 2024-09-05T13:00:43Z | 
|
jinoy/Llama-3.2-1B_synaptron | jinoy | 2024-10-28T02:41:06Z | 173 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-28T02:37:10Z | ---
library_name: transformers
tags:
- text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
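
Pending an official snippet, the following minimal sketch assumes the standard transformers text-generation interface indicated by the repository's tags.

```python
# Minimal sketch, assuming the standard transformers text-generation setup.
from transformers import pipeline

generator = pipeline("text-generation", model="jinoy/Llama-3.2-1B_synaptron")
print(generator("The quick brown fox", max_new_tokens=40)[0]["generated_text"])
```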
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |