modelId (string, lengths 5-139) | author (string, lengths 2-42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-26 12:28:48) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 498 classes) | tags (sequence, lengths 1 to 4.05k) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-26 12:28:16) | card (string, lengths 11 to 1.01M)
---|---|---|---|---|---|---|---|---|---
SachinGenAIMaster/tiny-chatbot-dpo | SachinGenAIMaster | 2024-05-19T06:34:10Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-05-19T06:32:05Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- dpo
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
model-index:
- name: tiny-chatbot-dpo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-chatbot-dpo
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
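For reference, a hypothetical sketch of how the values above map onto `transformers.TrainingArguments` (not taken from the actual training script; the output directory name is illustrative, and the optimizer settings listed above are the library defaults):
```python
from transformers import TrainingArguments

# Hypothetical mapping of the listed hyperparameters; output_dir is illustrative.
training_args = TrainingArguments(
    output_dir="tiny-chatbot-dpo",
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="cosine",
    max_steps=250,
    fp16=True,  # "Native AMP" mixed precision
)
```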
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
Yossh/Illust_Text_Detection | Yossh | 2024-05-19T06:33:58Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-19T06:27:51Z | ---
license: apache-2.0
---
This model detects dialogue text and onomatopoeia (sound effects) in anime illustrations.
It uses [InternViT-6B-448px-V1-5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5) as its base model.
You should be able to use it by attaching a classifier to the base model's pooler_output layer, roughly as follows:
```python
import torch
import torch.nn as nn
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor

device = torch.device("cuda")

class CustomModel(nn.Module):
    def __init__(self, base_model, num_classes=2):
        super(CustomModel, self).__init__()
        self.base_model = base_model
        # Classification head on top of the ViT pooled output
        self.classifier = nn.Linear(base_model.config.hidden_size, num_classes).to(torch.bfloat16)

    def forward(self, x):
        outputs = self.base_model(x)
        pooled_output = outputs.pooler_output
        logits = self.classifier(pooled_output)
        return logits

# Load the InternViT backbone and attach the trained classifier weights
base_model = AutoModel.from_pretrained(
    'OpenGVLab/InternViT-6B-448px-V1-5',
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True).cuda().eval()
model = CustomModel(base_model, num_classes=2).to(device).eval()
model.classifier.load_state_dict(torch.load("checkpoints/classifier_weights.pth"))

# Preprocess an image and run detection
image = Image.open('./examples/image1.jpg').convert('RGB')
image_processor = CLIPImageProcessor.from_pretrained('OpenGVLab/InternViT-6B-448px-V1-5')
pixel_values = image_processor(images=image, return_tensors='pt').pixel_values.to(torch.bfloat16).cuda()
with torch.no_grad():
    outputs = model(pixel_values)
``` |
apwic/sentiment-lora-r8a1d0.05-1 | apwic | 2024-05-19T06:33:14Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"id",
"base_model:indolem/indobert-base-uncased",
"base_model:finetune:indolem/indobert-base-uncased",
"license:mit",
"region:us"
] | null | 2024-05-19T06:00:01Z | ---
language:
- id
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: sentiment-lora-r8a1d0.05-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-lora-r8a1d0.05-1
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3148
- Accuracy: 0.8697
- Precision: 0.8474
- Recall: 0.8328
- F1: 0.8395
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 30
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.5657 | 1.0 | 122 | 0.5161 | 0.7243 | 0.6616 | 0.6474 | 0.6529 |
| 0.5088 | 2.0 | 244 | 0.4913 | 0.7393 | 0.6917 | 0.7056 | 0.6971 |
| 0.4682 | 3.0 | 366 | 0.4424 | 0.7845 | 0.7401 | 0.7425 | 0.7413 |
| 0.4114 | 4.0 | 488 | 0.3980 | 0.8095 | 0.7702 | 0.7702 | 0.7702 |
| 0.3862 | 5.0 | 610 | 0.3890 | 0.8145 | 0.7783 | 0.8088 | 0.7889 |
| 0.3512 | 6.0 | 732 | 0.3583 | 0.8496 | 0.8245 | 0.8036 | 0.8128 |
| 0.3428 | 7.0 | 854 | 0.3496 | 0.8521 | 0.8207 | 0.8254 | 0.8229 |
| 0.3254 | 8.0 | 976 | 0.3425 | 0.8496 | 0.8245 | 0.8036 | 0.8128 |
| 0.3226 | 9.0 | 1098 | 0.3388 | 0.8571 | 0.8310 | 0.8189 | 0.8245 |
| 0.3063 | 10.0 | 1220 | 0.3376 | 0.8647 | 0.8439 | 0.8217 | 0.8315 |
| 0.2939 | 11.0 | 1342 | 0.3319 | 0.8672 | 0.8463 | 0.8260 | 0.8351 |
| 0.2838 | 12.0 | 1464 | 0.3323 | 0.8546 | 0.8263 | 0.8196 | 0.8229 |
| 0.2916 | 13.0 | 1586 | 0.3283 | 0.8647 | 0.8472 | 0.8167 | 0.8296 |
| 0.2826 | 14.0 | 1708 | 0.3244 | 0.8672 | 0.8463 | 0.8260 | 0.8351 |
| 0.2739 | 15.0 | 1830 | 0.3231 | 0.8697 | 0.8449 | 0.8378 | 0.8412 |
| 0.2674 | 16.0 | 1952 | 0.3221 | 0.8697 | 0.8449 | 0.8378 | 0.8412 |
| 0.2648 | 17.0 | 2074 | 0.3193 | 0.8722 | 0.8528 | 0.8321 | 0.8413 |
| 0.2687 | 18.0 | 2196 | 0.3172 | 0.8697 | 0.8460 | 0.8353 | 0.8404 |
| 0.264 | 19.0 | 2318 | 0.3170 | 0.8747 | 0.8552 | 0.8363 | 0.8448 |
| 0.2637 | 20.0 | 2440 | 0.3148 | 0.8697 | 0.8474 | 0.8328 | 0.8395 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-5_0bpw_exl2 | Zoyd | 2024-05-19T06:30:53Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"en",
"arxiv:2405.03548",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"5-bit",
"exl2",
"region:us"
] | text-generation | 2024-05-19T06:22:48Z | ---
license: mit
language:
- en
---
**Exllamav2** quant (**exl2** / **5.0 bpw**) made with ExLlamaV2 v0.0.21
| Quant | Model Size | lm_head |
| ----- | ---------- | ------- |
| [3.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-3_0bpw_exl2) | 16960 MB | 6 |
| [3.5](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-3_5bpw_exl2) | 19723 MB | 6 |
| [3.75](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-3_75bpw_exl2) | 21106 MB | 6 |
| [4.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-4_0bpw_exl2) | 22447 MB | 6 |
| [4.25](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-4_25bpw_exl2) | 23877 MB | 6 |
| [5.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-5_0bpw_exl2) | 28025 MB | 6 |
| [6.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-6_0bpw_exl2) | 33581 MB | 8 |
| [6.5](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-6_5bpw_exl2) | 36250 MB | 8 |
| [8.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-8_0bpw_exl2) | 41879 MB | 8 |
# 🦣 MAmmoTH2: Scaling Instructions from the Web
Project Page: [https://tiger-ai-lab.github.io/MAmmoTH2/](https://tiger-ai-lab.github.io/MAmmoTH2/)
Paper: [https://arxiv.org/pdf/2405.03548](https://arxiv.org/pdf/2405.03548)
Code: [https://github.com/TIGER-AI-Lab/MAmmoTH2](https://github.com/TIGER-AI-Lab/MAmmoTH2)
## Introduction
Introducing 🦣 MAmmoTH2, a game-changer in improving the reasoning abilities of large language models (LLMs) through innovative instruction tuning. By efficiently harvesting 10 million instruction-response pairs from the pre-training web corpus, we've developed MAmmoTH2 models that significantly boost performance on reasoning benchmarks. For instance, MAmmoTH2-7B (Mistral) sees its performance soar from 11% to 34% on MATH and from 36% to 67% on GSM8K, all without training on any domain-specific data. Further training on public instruction tuning datasets yields MAmmoTH2-Plus, setting new standards in reasoning and chatbot benchmarks. Our work presents a cost-effective approach to acquiring large-scale, high-quality instruction data, offering a fresh perspective on enhancing LLM reasoning abilities.
| | **Base Model** | **MAmmoTH2** | **MAmmoTH2-Plus** |
|:-----|:---------------------|:-------------------------------------------------------------------|:------------------------------------------------------------------|
| 7B | Mistral | 🦣 [MAmmoTH2-7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B) | 🦣 [MAmmoTH2-7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B-Plus) |
| 8B | Llama-3 | 🦣 [MAmmoTH2-8B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B) | 🦣 [MAmmoTH2-8B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B-Plus) |
| 8x7B | Mixtral | 🦣 [MAmmoTH2-8x7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B) | 🦣 [MAmmoTH2-8x7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B-Plus) |
## Training Data
Please refer to https://huggingface.co/datasets/TIGER-Lab/WebInstructSub for more details.

## Training Procedure
The models are fine-tuned with the WEBINSTRUCT dataset using the original Llama-3, Mistral, and Mixtral models as base models. The training procedure varies for different models based on their sizes. Check out our paper for more details.
## Evaluation
The models are evaluated using open-ended and multiple-choice math problems from several datasets. Here are the results:
| **Model** | **TheoremQA** | **MATH** | **GSM8K** | **GPQA** | **MMLU-ST** | **BBH** | **ARC-C** | **Avg** |
|:-----------------------|:--------------|:---------|:----------|:---------|:------------|:--------|:----------|:---------|
| **MAmmoTH2-7B** | 26.7 | 34.2 | 67.4 | 34.8 | 60.6 | 60.0 | 81.8 | 52.2 |
| **MAmmoTH2-8B** | 29.7 | 33.4 | 67.9 | 38.4 | 61.0 | 60.8 | 81.0 | 53.1 |
| **MAmmoTH2-8x7B** | 32.2 | 39.0 | 75.4 | 36.8 | 67.4 | 71.1 | 87.5 | 58.9 |
| **MAmmoTH2-7B-Plus** | 29.2 | 45.0 | 84.7 | 36.8 | 64.5 | 63.1 | 83.0 | 58.0 |
| **MAmmoTH2-8B-Plus** | 32.5 | 42.8 | 84.1 | 37.3 | 65.7 | 67.8 | 83.4 | 59.1 |
| **MAmmoTH2-8x7B-Plus** | 34.1 | 47.0 | 86.4 | 37.8 | 72.4 | 74.1 | 88.4 | 62.9 |
## Usage
You can use the models through Huggingface's Transformers library. Use the pipeline function to create a text-generation pipeline with the model of your choice, then feed in a math problem to get the solution.
Check our Github repo for more advanced use: [https://github.com/TIGER-AI-Lab/MAmmoTH2](https://github.com/TIGER-AI-Lab/MAmmoTH2)
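As a minimal sketch of the pipeline-based usage described above (not part of the original card: it loads the unquantized TIGER-Lab checkpoint rather than this exl2 quant, and the prompt and generation settings are only illustrative):
```python
import torch
from transformers import pipeline

# Illustrative only: loads the original TIGER-Lab checkpoint, not this exl2 quant.
pipe = pipeline(
    "text-generation",
    model="TIGER-Lab/MAmmoTH2-8x7B-Plus",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
problem = "If a train travels 60 miles per hour for 2.5 hours, how far does it travel?"
print(pipe(problem, max_new_tokens=256)[0]["generated_text"])
```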
## Limitations
We've tried our best to build math generalist models. However, we acknowledge that the models' performance may vary based on the complexity and specifics of the math problem, and not all mathematical fields can be covered comprehensively.
## Citation
If you use the models, data, or code from this project, please cite the original paper:
```
@article{yue2024mammoth2,
title={MAmmoTH2: Scaling Instructions from the Web},
author={Yue, Xiang and Zheng, Tuney and Zhang, Ge and Chen, Wenhu},
journal={arXiv preprint arXiv:2405.03548},
year={2024}
}
``` |
theGhoul21/srl-adapter-irpo-6000 | theGhoul21 | 2024-05-19T06:28:35Z | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:theGhoul21/srl-base-irpo-080524-16bit-v0.3-lighning-ai",
"base_model:adapter:theGhoul21/srl-base-irpo-080524-16bit-v0.3-lighning-ai",
"region:us"
] | null | 2024-05-17T04:44:58Z | ---
library_name: peft
base_model: theGhoul21/srl-base-irpo-080524-16bit-v0.3-lighning-ai
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 |
theGhoul21/srl-base-irpo-080524-16bit-v0.3-lighning-ai-6000 | theGhoul21 | 2024-05-19T06:24:50Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-19T06:22:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
udev4096/docker-commands | udev4096 | 2024-05-19T06:23:43Z | 19 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-19T06:10:14Z | ---
license: mit
---
Fine-tuned Phi-3 on the Docker commands dataset from [adeocybersecurity/DockerCommand](https://huggingface.co/datasets/adeocybersecurity/DockerCommand). |
sskr/toxic_classification | sskr | 2024-05-19T06:23:39Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-19T06:23:39Z | ---
license: apache-2.0
---
|
Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-4_25bpw_exl2 | Zoyd | 2024-05-19T06:21:40Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"en",
"arxiv:2405.03548",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2024-05-19T06:14:54Z | ---
license: mit
language:
- en
---
**Exllamav2** quant (**exl2** / **4.25 bpw**) made with ExLlamaV2 v0.0.21
| Quant | Model Size | lm_head |
| ----- | ---------- | ------- |
| [3.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-3_0bpw_exl2) | 16960 MB | 6 |
| [3.5](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-3_5bpw_exl2) | 19723 MB | 6 |
| [3.75](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-3_75bpw_exl2) | 21106 MB | 6 |
| [4.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-4_0bpw_exl2) | 22447 MB | 6 |
| [4.25](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-4_25bpw_exl2) | 23877 MB | 6 |
| [5.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-5_0bpw_exl2) | 28025 MB | 6 |
| [6.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-6_0bpw_exl2) | 33581 MB | 8 |
| [6.5](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-6_5bpw_exl2) | 36250 MB | 8 |
| [8.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-8_0bpw_exl2) | 41879 MB | 8 |
# 🦣 MAmmoTH2: Scaling Instructions from the Web
Project Page: [https://tiger-ai-lab.github.io/MAmmoTH2/](https://tiger-ai-lab.github.io/MAmmoTH2/)
Paper: [https://arxiv.org/pdf/2405.03548](https://arxiv.org/pdf/2405.03548)
Code: [https://github.com/TIGER-AI-Lab/MAmmoTH2](https://github.com/TIGER-AI-Lab/MAmmoTH2)
## Introduction
Introducing 🦣 MAmmoTH2, a game-changer in improving the reasoning abilities of large language models (LLMs) through innovative instruction tuning. By efficiently harvesting 10 million instruction-response pairs from the pre-training web corpus, we've developed MAmmoTH2 models that significantly boost performance on reasoning benchmarks. For instance, MAmmoTH2-7B (Mistral) sees its performance soar from 11% to 34% on MATH and from 36% to 67% on GSM8K, all without training on any domain-specific data. Further training on public instruction tuning datasets yields MAmmoTH2-Plus, setting new standards in reasoning and chatbot benchmarks. Our work presents a cost-effective approach to acquiring large-scale, high-quality instruction data, offering a fresh perspective on enhancing LLM reasoning abilities.
| | **Base Model** | **MAmmoTH2** | **MAmmoTH2-Plus** |
|:-----|:---------------------|:-------------------------------------------------------------------|:------------------------------------------------------------------|
| 7B | Mistral | 🦣 [MAmmoTH2-7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B) | 🦣 [MAmmoTH2-7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B-Plus) |
| 8B | Llama-3 | 🦣 [MAmmoTH2-8B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B) | 🦣 [MAmmoTH2-8B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B-Plus) |
| 8x7B | Mixtral | 🦣 [MAmmoTH2-8x7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B) | 🦣 [MAmmoTH2-8x7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B-Plus) |
## Training Data
Please refer to https://huggingface.co/datasets/TIGER-Lab/WebInstructSub for more details.

## Training Procedure
The models are fine-tuned with the WEBINSTRUCT dataset using the original Llama-3, Mistral, and Mixtral models as base models. The training procedure varies for different models based on their sizes. Check out our paper for more details.
## Evaluation
The models are evaluated using open-ended and multiple-choice math problems from several datasets. Here are the results:
| **Model** | **TheoremQA** | **MATH** | **GSM8K** | **GPQA** | **MMLU-ST** | **BBH** | **ARC-C** | **Avg** |
|:-----------------------|:--------------|:---------|:----------|:---------|:------------|:--------|:----------|:---------|
| **MAmmoTH2-7B** | 26.7 | 34.2 | 67.4 | 34.8 | 60.6 | 60.0 | 81.8 | 52.2 |
| **MAmmoTH2-8B** | 29.7 | 33.4 | 67.9 | 38.4 | 61.0 | 60.8 | 81.0 | 53.1 |
| **MAmmoTH2-8x7B** | 32.2 | 39.0 | 75.4 | 36.8 | 67.4 | 71.1 | 87.5 | 58.9 |
| **MAmmoTH2-7B-Plus** | 29.2 | 45.0 | 84.7 | 36.8 | 64.5 | 63.1 | 83.0 | 58.0 |
| **MAmmoTH2-8B-Plus** | 32.5 | 42.8 | 84.1 | 37.3 | 65.7 | 67.8 | 83.4 | 59.1 |
| **MAmmoTH2-8x7B-Plus** | 34.1 | 47.0 | 86.4 | 37.8 | 72.4 | 74.1 | 88.4 | 62.9 |
## Usage
You can use the models through Huggingface's Transformers library. Use the pipeline function to create a text-generation pipeline with the model of your choice, then feed in a math problem to get the solution.
Check our Github repo for more advanced use: [https://github.com/TIGER-AI-Lab/MAmmoTH2](https://github.com/TIGER-AI-Lab/MAmmoTH2)
## Limitations
We've tried our best to build math generalist models. However, we acknowledge that the models' performance may vary based on the complexity and specifics of the math problem, and not all mathematical fields can be covered comprehensively.
## Citation
If you use the models, data, or code from this project, please cite the original paper:
```
@article{yue2024mammoth2,
title={MAmmoTH2: Scaling Instructions from the Web},
author={Yue, Xiang and Zheng, Tuney and Zhang, Ge and Chen, Wenhu},
journal={arXiv preprint arXiv:2405.03548},
year={2024}
}
``` |
mayankchugh-learning/sft-tiny-chatbot | mayankchugh-learning | 2024-05-19T06:21:21Z | 3 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-05-19T06:20:05Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
model-index:
- name: sft-tiny-chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft-tiny-chatbot
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
SAMMY007/sft-tiny-chatbot | SAMMY007 | 2024-05-19T06:20:12Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-05-19T06:18:52Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
model-index:
- name: sft-tiny-chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft-tiny-chatbot
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
akash-soni/sft-tiny-chatbot | akash-soni | 2024-05-19T06:16:33Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-05-19T05:52:13Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
model-index:
- name: sft-tiny-chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft-tiny-chatbot
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
sushilchikane/sft-tiny-chatbot | sushilchikane | 2024-05-19T06:14:29Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-05-19T06:13:10Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
model-index:
- name: sft-tiny-chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft-tiny-chatbot
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-4_0bpw_exl2 | Zoyd | 2024-05-19T06:13:44Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"en",
"arxiv:2405.03548",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"exl2",
"region:us"
] | text-generation | 2024-05-19T06:05:16Z | ---
license: mit
language:
- en
---
**Exllamav2** quant (**exl2** / **4.0 bpw**) made with ExLlamaV2 v0.0.21
| Quant | Model Size | lm_head |
| ----- | ---------- | ------- |
| [3.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-3_0bpw_exl2) | 16960 MB | 6 |
| [3.5](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-3_5bpw_exl2) | 19723 MB | 6 |
| [3.75](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-3_75bpw_exl2) | 21106 MB | 6 |
| [4.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-4_0bpw_exl2) | 22447 MB | 6 |
| [4.25](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-4_25bpw_exl2) | 23877 MB | 6 |
| [5.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-5_0bpw_exl2) | 28025 MB | 6 |
| [6.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-6_0bpw_exl2) | 33581 MB | 8 |
| [6.5](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-6_5bpw_exl2) | 36250 MB | 8 |
| [8.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-8_0bpw_exl2) | 41879 MB | 8 |
# 🦣 MAmmoTH2: Scaling Instructions from the Web
Project Page: [https://tiger-ai-lab.github.io/MAmmoTH2/](https://tiger-ai-lab.github.io/MAmmoTH2/)
Paper: [https://arxiv.org/pdf/2405.03548](https://arxiv.org/pdf/2405.03548)
Code: [https://github.com/TIGER-AI-Lab/MAmmoTH2](https://github.com/TIGER-AI-Lab/MAmmoTH2)
## Introduction
Introducing 🦣 MAmmoTH2, a game-changer in improving the reasoning abilities of large language models (LLMs) through innovative instruction tuning. By efficiently harvesting 10 million instruction-response pairs from the pre-training web corpus, we've developed MAmmoTH2 models that significantly boost performance on reasoning benchmarks. For instance, MAmmoTH2-7B (Mistral) sees its performance soar from 11% to 34% on MATH and from 36% to 67% on GSM8K, all without training on any domain-specific data. Further training on public instruction tuning datasets yields MAmmoTH2-Plus, setting new standards in reasoning and chatbot benchmarks. Our work presents a cost-effective approach to acquiring large-scale, high-quality instruction data, offering a fresh perspective on enhancing LLM reasoning abilities.
| | **Base Model** | **MAmmoTH2** | **MAmmoTH2-Plus** |
|:-----|:---------------------|:-------------------------------------------------------------------|:------------------------------------------------------------------|
| 7B | Mistral | 🦣 [MAmmoTH2-7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B) | 🦣 [MAmmoTH2-7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B-Plus) |
| 8B | Llama-3 | 🦣 [MAmmoTH2-8B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B) | 🦣 [MAmmoTH2-8B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B-Plus) |
| 8x7B | Mixtral | 🦣 [MAmmoTH2-8x7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B) | 🦣 [MAmmoTH2-8x7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B-Plus) |
## Training Data
Please refer to https://huggingface.co/datasets/TIGER-Lab/WebInstructSub for more details.

## Training Procedure
The models are fine-tuned with the WEBINSTRUCT dataset using the original Llama-3, Mistral, and Mixtral models as base models. The training procedure varies for different models based on their sizes. Check out our paper for more details.
## Evaluation
The models are evaluated using open-ended and multiple-choice math problems from several datasets. Here are the results:
| **Model** | **TheoremQA** | **MATH** | **GSM8K** | **GPQA** | **MMLU-ST** | **BBH** | **ARC-C** | **Avg** |
|:-----------------------|:--------------|:---------|:----------|:---------|:------------|:--------|:----------|:---------|
| **MAmmoTH2-7B** | 26.7 | 34.2 | 67.4 | 34.8 | 60.6 | 60.0 | 81.8 | 52.2 |
| **MAmmoTH2-8B** | 29.7 | 33.4 | 67.9 | 38.4 | 61.0 | 60.8 | 81.0 | 53.1 |
| **MAmmoTH2-8x7B** | 32.2 | 39.0 | 75.4 | 36.8 | 67.4 | 71.1 | 87.5 | 58.9 |
| **MAmmoTH2-7B-Plus** | 29.2 | 45.0 | 84.7 | 36.8 | 64.5 | 63.1 | 83.0 | 58.0 |
| **MAmmoTH2-8B-Plus** | 32.5 | 42.8 | 84.1 | 37.3 | 65.7 | 67.8 | 83.4 | 59.1 |
| **MAmmoTH2-8x7B-Plus** | 34.1 | 47.0 | 86.4 | 37.8 | 72.4 | 74.1 | 88.4 | 62.9 |
## Usage
You can use the models through Huggingface's Transformers library. Use the pipeline function to create a text-generation pipeline with the model of your choice, then feed in a math problem to get the solution.
Check our Github repo for more advanced use: [https://github.com/TIGER-AI-Lab/MAmmoTH2](https://github.com/TIGER-AI-Lab/MAmmoTH2)
## Limitations
We've tried our best to build math generalist models. However, we acknowledge that the models' performance may vary based on the complexity and specifics of the math problem, and not all mathematical fields can be covered comprehensively.
## Citation
If you use the models, data, or code from this project, please cite the original paper:
```
@article{yue2024mammoth2,
title={MAmmoTH2: Scaling Instructions from the Web},
author={Yue, Xiang and Zheng, Tuney and Zhang, Ge and Chen, Wenhu},
journal={arXiv preprint arXiv:2405.03548},
year={2024}
}
``` |
Pubudu/mbart-large-50_par_bn_rf_16_dinamina_5400 | Pubudu | 2024-05-19T06:11:17Z | 3 | 0 | adapter-transformers | [
"adapter-transformers",
"adapterhub:summarization/dinamina_5400_full_text",
"mbart",
"dataset:dinamina_5400_full_text",
"region:us"
] | null | 2024-05-19T06:10:18Z | ---
tags:
- adapterhub:summarization/dinamina_5400_full_text
- mbart
- adapter-transformers
datasets:
- dinamina_5400_full_text
---
# Adapter `Pubudu/mbart-large-50_par_bn_rf_16_dinamina_5400` for facebook/mbart-large-50
An [adapter](https://adapterhub.ml) for the `facebook/mbart-large-50` model that was trained on the [summarization/dinamina_5400_full_text](https://adapterhub.ml/explore/summarization/dinamina_5400_full_text/) dataset.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("facebook/mbart-large-50")
adapter_name = model.load_adapter("Pubudu/mbart-large-50_par_bn_rf_16_dinamina_5400", source="hf", set_active=True)
```
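A hypothetical follow-up for generating a summary with the loaded adapter; this is not from the original card, assumes the saved adapter includes a seq2seq generation head, and uses placeholder input text and generation settings:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-50")

article = "..."  # placeholder for a Sinhala news article from the dataset
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```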
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
DevonPeroutky/llama3-reddit-therapist-lora | DevonPeroutky | 2024-05-19T06:07:46Z | 3 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:adapter:unsloth/llama-3-8b-Instruct-bnb-4bit",
"region:us"
] | null | 2024-05-19T05:16:27Z | ---
library_name: peft
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 |
hansmueller464/Llama3-Aloe-8B-Alpha-Q6_K-GGUF | hansmueller464 | 2024-05-19T06:07:00Z | 7 | 0 | transformers | [
"transformers",
"gguf",
"biology",
"medical",
"llama-cpp",
"gguf-my-repo",
"question-answering",
"en",
"dataset:argilla/dpo-mix-7k",
"dataset:nvidia/HelpSteer",
"dataset:jondurbin/airoboros-3.2",
"dataset:hkust-nlp/deita-10k-v0",
"dataset:LDJnr/Capybara",
"dataset:HPAI-BSC/CareQA",
"dataset:GBaker/MedQA-USMLE-4-options",
"dataset:lukaemon/mmlu",
"dataset:bigbio/pubmed_qa",
"dataset:openlifescienceai/medmcqa",
"dataset:bigbio/med_qa",
"dataset:HPAI-BSC/better-safe-than-sorry",
"dataset:HPAI-BSC/pubmedqa-cot",
"dataset:HPAI-BSC/medmcqa-cot",
"dataset:HPAI-BSC/medqa-cot",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | question-answering | 2024-05-19T05:51:54Z | ---
language:
- en
license: cc-by-nc-4.0
library_name: transformers
tags:
- biology
- medical
- llama-cpp
- gguf-my-repo
datasets:
- argilla/dpo-mix-7k
- nvidia/HelpSteer
- jondurbin/airoboros-3.2
- hkust-nlp/deita-10k-v0
- LDJnr/Capybara
- HPAI-BSC/CareQA
- GBaker/MedQA-USMLE-4-options
- lukaemon/mmlu
- bigbio/pubmed_qa
- openlifescienceai/medmcqa
- bigbio/med_qa
- HPAI-BSC/better-safe-than-sorry
- HPAI-BSC/pubmedqa-cot
- HPAI-BSC/medmcqa-cot
- HPAI-BSC/medqa-cot
pipeline_tag: question-answering
---
# hansmueller464/Llama3-Aloe-8B-Alpha-Q6_K-GGUF
This model was converted to GGUF format from [`HPAI-BSC/Llama3-Aloe-8B-Alpha`](https://huggingface.co/HPAI-BSC/Llama3-Aloe-8B-Alpha) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/HPAI-BSC/Llama3-Aloe-8B-Alpha) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo hansmueller464/Llama3-Aloe-8B-Alpha-Q6_K-GGUF --model llama3-aloe-8b-alpha.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo hansmueller464/Llama3-Aloe-8B-Alpha-Q6_K-GGUF --model llama3-aloe-8b-alpha.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama3-aloe-8b-alpha.Q6_K.gguf -n 128
```
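Alternatively, a hypothetical sketch using the llama-cpp-python bindings (not from the original card; it assumes the `llama-cpp-python` package is installed and that `Llama.from_pretrained` can fetch the GGUF file from this repo):
```python
from llama_cpp import Llama

# Illustrative settings; adjust n_ctx and max_tokens as needed.
llm = Llama.from_pretrained(
    repo_id="hansmueller464/Llama3-Aloe-8B-Alpha-Q6_K-GGUF",
    filename="llama3-aloe-8b-alpha.Q6_K.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```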
|
moanlb/t5-small_finetuned_Informal_text-to-Formal_text | moanlb | 2024-05-19T06:03:51Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-18T05:37:22Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: t5-small_finetuned_Informal_text-to-Formal_text
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small_finetuned_Informal_text-to-Formal_text
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 10.375
- Bleu: 0.0
- Gen Len: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:------:|:-------:|
| 9.3669 | 1.0 | 5229 | 9.4520 | 0.0023 | 19.0 |
| 10.2293 | 2.0 | 10458 | 10.2588 | 0.1433 | 6.0 |
| 10.3618 | 3.0 | 15687 | 10.3648 | 0.0 | 0.0 |
| 10.375 | 4.0 | 20916 | 10.375 | 0.0 | 0.0 |
| 10.375 | 5.0 | 26145 | 10.375 | 0.0 | 0.0 |
| 10.375 | 6.0 | 31374 | 10.375 | 0.0 | 0.0 |
| 10.375 | 7.0 | 36603 | 10.375 | 0.0 | 0.0 |
| 10.375 | 8.0 | 41832 | 10.375 | 0.0 | 0.0 |
| 10.375 | 9.0 | 47061 | 10.375 | 0.0 | 0.0 |
| 10.375 | 10.0 | 52290 | 10.375 | 0.0 | 0.0 |
| 10.375 | 11.0 | 57519 | 10.375 | 0.0 | 0.0 |
| 10.375 | 12.0 | 62748 | 10.375 | 0.0 | 0.0 |
| 10.375 | 13.0 | 67977 | 10.375 | 0.0 | 0.0 |
| 10.375 | 14.0 | 73206 | 10.375 | 0.0 | 0.0 |
| 10.375 | 15.0 | 78435 | 10.375 | 0.0 | 0.0 |
| 10.375 | 16.0 | 83664 | 10.375 | 0.0 | 0.0 |
| 10.375 | 17.0 | 88893 | 10.375 | 0.0 | 0.0 |
| 10.375 | 18.0 | 94122 | 10.375 | 0.0 | 0.0 |
| 10.375 | 19.0 | 99351 | 10.375 | 0.0 | 0.0 |
| 10.375 | 20.0 | 104580 | 10.375 | 0.0 | 0.0 |
| 10.375 | 21.0 | 109809 | 10.375 | 0.0 | 0.0 |
| 10.375 | 22.0 | 115038 | 10.375 | 0.0 | 0.0 |
| 10.375 | 23.0 | 120267 | 10.375 | 0.0 | 0.0 |
| 10.375 | 24.0 | 125496 | 10.375 | 0.0 | 0.0 |
| 10.375 | 25.0 | 130725 | 10.375 | 0.0 | 0.0 |
| 10.375 | 26.0 | 135954 | 10.375 | 0.0 | 0.0 |
| 10.375 | 27.0 | 141183 | 10.375 | 0.0 | 0.0 |
| 10.375 | 28.0 | 146412 | 10.375 | 0.0 | 0.0 |
| 10.375 | 29.0 | 151641 | 10.375 | 0.0 | 0.0 |
| 10.375 | 30.0 | 156870 | 10.375 | 0.0 | 0.0 |
| 10.375 | 31.0 | 162099 | 10.375 | 0.0 | 0.0 |
| 10.375 | 32.0 | 167328 | 10.375 | 0.0 | 0.0 |
| 10.375 | 33.0 | 172557 | 10.375 | 0.0 | 0.0 |
| 10.375 | 34.0 | 177786 | 10.375 | 0.0 | 0.0 |
| 10.375 | 35.0 | 183015 | 10.375 | 0.0 | 0.0 |
| 10.375 | 36.0 | 188244 | 10.375 | 0.0 | 0.0 |
| 10.375 | 37.0 | 193473 | 10.375 | 0.0 | 0.0 |
| 10.375 | 38.0 | 198702 | 10.375 | 0.0 | 0.0 |
| 10.375 | 39.0 | 203931 | 10.375 | 0.0 | 0.0 |
| 10.375 | 40.0 | 209160 | 10.375 | 0.0 | 0.0 |
| 10.375 | 41.0 | 214389 | 10.375 | 0.0 | 0.0 |
| 10.375 | 42.0 | 219618 | 10.375 | 0.0 | 0.0 |
| 10.375 | 43.0 | 224847 | 10.375 | 0.0 | 0.0 |
| 10.375 | 44.0 | 230076 | 10.375 | 0.0 | 0.0 |
| 10.375 | 45.0 | 235305 | 10.375 | 0.0 | 0.0 |
| 10.375 | 46.0 | 240534 | 10.375 | 0.0 | 0.0 |
| 10.375 | 47.0 | 245763 | 10.375 | 0.0 | 0.0 |
| 10.375 | 48.0 | 250992 | 10.375 | 0.0 | 0.0 |
| 10.375 | 49.0 | 256221 | 10.375 | 0.0 | 0.0 |
| 10.375 | 50.0 | 261450 | 10.375 | 0.0 | 0.0 |
| 10.375 | 51.0 | 266679 | 10.375 | 0.0 | 0.0 |
| 10.375 | 52.0 | 271908 | 10.375 | 0.0 | 0.0 |
| 10.375 | 53.0 | 277137 | 10.375 | 0.0 | 0.0 |
| 10.375 | 54.0 | 282366 | 10.375 | 0.0 | 0.0 |
| 10.375 | 55.0 | 287595 | 10.375 | 0.0 | 0.0 |
| 10.375 | 56.0 | 292824 | 10.375 | 0.0 | 0.0 |
| 10.375 | 57.0 | 298053 | 10.375 | 0.0 | 0.0 |
| 10.375 | 58.0 | 303282 | 10.375 | 0.0 | 0.0 |
| 10.375 | 59.0 | 308511 | 10.375 | 0.0 | 0.0 |
| 10.375 | 60.0 | 313740 | 10.375 | 0.0 | 0.0 |
| 10.375 | 61.0 | 318969 | 10.375 | 0.0 | 0.0 |
| 10.375 | 62.0 | 324198 | 10.375 | 0.0 | 0.0 |
| 10.375 | 63.0 | 329427 | 10.375 | 0.0 | 0.0 |
| 10.375 | 64.0 | 334656 | 10.375 | 0.0 | 0.0 |
| 10.375 | 65.0 | 339885 | 10.375 | 0.0 | 0.0 |
| 10.375 | 66.0 | 345114 | 10.375 | 0.0 | 0.0 |
| 10.375 | 67.0 | 350343 | 10.375 | 0.0 | 0.0 |
| 10.375 | 68.0 | 355572 | 10.375 | 0.0 | 0.0 |
| 10.375 | 69.0 | 360801 | 10.375 | 0.0 | 0.0 |
| 10.375 | 70.0 | 366030 | 10.375 | 0.0 | 0.0 |
| 10.375 | 71.0 | 371259 | 10.375 | 0.0 | 0.0 |
| 10.375 | 72.0 | 376488 | 10.375 | 0.0 | 0.0 |
| 10.375 | 73.0 | 381717 | 10.375 | 0.0 | 0.0 |
| 10.375 | 74.0 | 386946 | 10.375 | 0.0 | 0.0 |
| 10.375 | 75.0 | 392175 | 10.375 | 0.0 | 0.0 |
| 10.375 | 76.0 | 397404 | 10.375 | 0.0 | 0.0 |
| 10.375 | 77.0 | 402633 | 10.375 | 0.0 | 0.0 |
| 10.375 | 78.0 | 407862 | 10.375 | 0.0 | 0.0 |
| 10.375 | 79.0 | 413091 | 10.375 | 0.0 | 0.0 |
| 10.375 | 80.0 | 418320 | 10.375 | 0.0 | 0.0 |
| 10.375 | 81.0 | 423549 | 10.375 | 0.0 | 0.0 |
| 10.375 | 82.0 | 428778 | 10.375 | 0.0 | 0.0 |
| 10.375 | 83.0 | 434007 | 10.375 | 0.0 | 0.0 |
| 10.375 | 84.0 | 439236 | 10.375 | 0.0 | 0.0 |
| 10.375 | 85.0 | 444465 | 10.375 | 0.0 | 0.0 |
| 10.375 | 86.0 | 449694 | 10.375 | 0.0 | 0.0 |
| 10.375 | 87.0 | 454923 | 10.375 | 0.0 | 0.0 |
| 10.375 | 88.0 | 460152 | 10.375 | 0.0 | 0.0 |
| 10.375 | 89.0 | 465381 | 10.375 | 0.0 | 0.0 |
| 10.375 | 90.0 | 470610 | 10.375 | 0.0 | 0.0 |
| 10.375 | 91.0 | 475839 | 10.375 | 0.0 | 0.0 |
| 10.375 | 92.0 | 481068 | 10.375 | 0.0 | 0.0 |
| 10.375 | 93.0 | 486297 | 10.375 | 0.0 | 0.0 |
| 10.375 | 94.0 | 491526 | 10.375 | 0.0 | 0.0 |
| 10.375 | 95.0 | 496755 | 10.375 | 0.0 | 0.0 |
| 10.375 | 96.0 | 501984 | 10.375 | 0.0 | 0.0 |
| 10.375 | 97.0 | 507213 | 10.375 | 0.0 | 0.0 |
| 10.375 | 98.0 | 512442 | 10.375 | 0.0 | 0.0 |
| 10.375 | 99.0 | 517671 | 10.375 | 0.0 | 0.0 |
| 10.375 | 100.0 | 522900 | 10.375 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-3_75bpw_exl2 | Zoyd | 2024-05-19T06:03:00Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"en",
"arxiv:2405.03548",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2024-05-19T05:55:10Z | ---
license: mit
language:
- en
---
**Exllamav2** quant (**exl2** / **3.75 bpw**) made with ExLlamaV2 v0.0.21
| Quant | Model Size | lm_head |
| ----- | ---------- | ------- |
| [3.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-3_0bpw_exl2) | 16960 MB | 6 |
| [3.5](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-3_5bpw_exl2) | 19723 MB | 6 |
| [3.75](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-3_75bpw_exl2) | 21106 MB | 6 |
| [4.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-4_0bpw_exl2) | 22447 MB | 6 |
| [4.25](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-4_25bpw_exl2) | 23877 MB | 6 |
| [5.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-5_0bpw_exl2) | 28025 MB | 6 |
| [6.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-6_0bpw_exl2) | 33581 MB | 8 |
| [6.5](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-6_5bpw_exl2) | 36250 MB | 8 |
| [8.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-8_0bpw_exl2) | 41879 MB | 8 |
# 🦣 MAmmoTH2: Scaling Instructions from the Web
Project Page: [https://tiger-ai-lab.github.io/MAmmoTH2/](https://tiger-ai-lab.github.io/MAmmoTH2/)
Paper: [https://arxiv.org/pdf/2405.03548](https://arxiv.org/pdf/2405.03548)
Code: [https://github.com/TIGER-AI-Lab/MAmmoTH2](https://github.com/TIGER-AI-Lab/MAmmoTH2)
## Introduction
Introducing 🦣 MAmmoTH2, a game-changer in improving the reasoning abilities of large language models (LLMs) through innovative instruction tuning. By efficiently harvesting 10 million instruction-response pairs from the pre-training web corpus, we've developed MAmmoTH2 models that significantly boost performance on reasoning benchmarks. For instance, MAmmoTH2-7B (Mistral) sees its performance soar from 11% to 34% on MATH and from 36% to 67% on GSM8K, all without training on any domain-specific data. Further training on public instruction tuning datasets yields MAmmoTH2-Plus, setting new standards in reasoning and chatbot benchmarks. Our work presents a cost-effective approach to acquiring large-scale, high-quality instruction data, offering a fresh perspective on enhancing LLM reasoning abilities.
| | **Base Model** | **MAmmoTH2** | **MAmmoTH2-Plus** |
|:-----|:---------------------|:-------------------------------------------------------------------|:------------------------------------------------------------------|
| 7B | Mistral | 🦣 [MAmmoTH2-7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B) | 🦣 [MAmmoTH2-7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B-Plus) |
| 8B | Llama-3 | 🦣 [MAmmoTH2-8B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B) | 🦣 [MAmmoTH2-8B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B-Plus) |
| 8x7B | Mixtral | 🦣 [MAmmoTH2-8x7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B) | 🦣 [MAmmoTH2-8x7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B-Plus) |
## Training Data
Please refer to https://huggingface.co/datasets/TIGER-Lab/WebInstructSub for more details.

## Training Procedure
The models are fine-tuned with the WEBINSTRUCT dataset using the original Llama-3, Mistral, and Mixtral models as base models. The training procedure varies for different models based on their sizes. Check out our paper for more details.
## Evaluation
The models are evaluated using open-ended and multiple-choice math problems from several datasets. Here are the results:
| **Model** | **TheoremQA** | **MATH** | **GSM8K** | **GPQA** | **MMLU-ST** | **BBH** | **ARC-C** | **Avg** |
|:-----------------------|:--------------|:---------|:----------|:---------|:------------|:--------|:----------|:---------|
| **MAmmoTH2-7B** | 26.7 | 34.2 | 67.4 | 34.8 | 60.6 | 60.0 | 81.8 | 52.2 |
| **MAmmoTH2-8B** | 29.7 | 33.4 | 67.9 | 38.4 | 61.0 | 60.8 | 81.0 | 53.1 |
| **MAmmoTH2-8x7B** | 32.2 | 39.0 | 75.4 | 36.8 | 67.4 | 71.1 | 87.5 | 58.9 |
| **MAmmoTH2-7B-Plus** | 29.2 | 45.0 | 84.7 | 36.8 | 64.5 | 63.1 | 83.0 | 58.0 |
| **MAmmoTH2-8B-Plus** | 32.5 | 42.8 | 84.1 | 37.3 | 65.7 | 67.8 | 83.4 | 59.1 |
| **MAmmoTH2-8x7B-Plus** | 34.1 | 47.0 | 86.4 | 37.8 | 72.4 | 74.1 | 88.4 | 62.9 |
## Usage
You can use the models through Hugging Face's Transformers library. Use the `pipeline` function to create a text-generation pipeline with the model of your choice, then feed in a math problem to get the solution, as sketched below.
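A minimal sketch of that pattern, assuming the original TIGER-Lab/MAmmoTH2-8x7B-Plus checkpoint is loaded (the exl2 quants listed above instead require an ExLlamaV2-compatible loader); the math problem is only illustrative:

```python
import torch
import transformers

# Any MAmmoTH2 / MAmmoTH2-Plus checkpoint can be substituted here.
model_id = "TIGER-Lab/MAmmoTH2-8x7B-Plus"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

problem = "If 3x + 5 = 20, what is x? Show your reasoning."
outputs = pipeline(problem, max_new_tokens=256, do_sample=False)
print(outputs[0]["generated_text"])
```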
Check our Github repo for more advanced use: [https://github.com/TIGER-AI-Lab/MAmmoTH2](https://github.com/TIGER-AI-Lab/MAmmoTH2)
## Limitations
We've tried our best to build math generalist models. However, we acknowledge that the models' performance may vary with the complexity and specifics of the math problem, and not all mathematical fields can be covered comprehensively.
## Citation
If you use the models, data, or code from this project, please cite the original paper:
```
@article{yue2024mammoth2,
title={MAmmoTH2: Scaling Instructions from the Web},
author={Yue, Xiang and Zheng, Tuney and Zhang, Ge and Chen, Wenhu},
journal={arXiv preprint arXiv:2405.03548},
year={2024}
}
``` |
Vlad1m/toxicity_analyzer | Vlad1m | 2024-05-19T06:02:15Z | 107 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-19T05:58:38Z | ---
license: apache-2.0
---
|
Avik812/sft-tiny-chatbot | Avik812 | 2024-05-19T06:02:13Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-05-19T06:00:46Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
model-index:
- name: sft-tiny-chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft-tiny-chatbot
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
baroniaadarsh/sft-tiny-chatbot | baroniaadarsh | 2024-05-19T05:59:48Z | 0 | 1 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-05-19T05:58:19Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
model-index:
- name: sft-tiny-chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft-tiny-chatbot
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
PraveenCMR/sft-tiny-chatbot | PraveenCMR | 2024-05-19T05:57:21Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-05-19T05:56:00Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
model-index:
- name: sft-tiny-chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft-tiny-chatbot
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
Divyaamith/sft-tiny-chatbot | Divyaamith | 2024-05-19T05:53:50Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-05-19T05:52:25Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
model-index:
- name: sft-tiny-chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft-tiny-chatbot
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
Reenal/sft-tiny-chatbot | Reenal | 2024-05-19T05:53:49Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-05-19T05:52:29Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
model-index:
- name: sft-tiny-chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft-tiny-chatbot
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
srikmc2702/sft-tiny-chatbot | srikmc2702 | 2024-05-19T05:52:44Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-05-19T05:51:26Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
model-index:
- name: sft-tiny-chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft-tiny-chatbot
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
adas100/sft-tiny-chatbot | adas100 | 2024-05-19T05:51:21Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-05-19T05:50:02Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
model-index:
- name: sft-tiny-chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft-tiny-chatbot
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
bhassi01/sft-tiny-chatbot | bhassi01 | 2024-05-19T05:50:58Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-05-19T05:49:30Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
model-index:
- name: sft-tiny-chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft-tiny-chatbot
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
rgsubramaniam/sft-tiny-chatbot | rgsubramaniam | 2024-05-19T05:50:24Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-05-19T05:49:06Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
model-index:
- name: sft-tiny-chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft-tiny-chatbot
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
JapiKredi/sft-tiny-chatbot | JapiKredi | 2024-05-19T05:50:06Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-05-19T05:48:46Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
model-index:
- name: sft-tiny-chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft-tiny-chatbot
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
CMU-AIR2/math-phi-1-5-FULL-Arithmetic-lr-1-1e-6 | CMU-AIR2 | 2024-05-19T05:38:02Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-18T21:31:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
CMU-AIR2/math-phi-1-5-FULL-Arithmetic-lr-1.5e-6 | CMU-AIR2 | 2024-05-19T05:37:43Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-18T06:33:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
CMU-AIR2/math-phi-1-5-FULL-Arithmetic-lr-1e-6 | CMU-AIR2 | 2024-05-19T05:37:21Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-18T09:33:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/leveldevai_-_TurdusBeagle-7B-4bits | RichardErkhov | 2024-05-19T05:33:20Z | 77 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-19T05:30:01Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
TurdusBeagle-7B - bnb 4bits
- Model creator: https://huggingface.co/leveldevai/
- Original model: https://huggingface.co/leveldevai/TurdusBeagle-7B/
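A minimal loading sketch for this 4-bit quant, assuming the bitsandbytes quantization config is stored alongside the weights so that `from_pretrained` picks it up automatically:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/leveldevai_-_TurdusBeagle-7B-4bits"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" places the already-quantized weights on the available GPU(s).
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "What is a large language model?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```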
Original model description:
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- udkai/Turdus
- mlabonne/NeuralBeagle14-7B
---
# TurdusBeagle-7B
TurdusBeagle-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [udkai/Turdus](https://huggingface.co/udkai/Turdus)
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: udkai/Turdus
layer_range: [0, 32]
- model: mlabonne/NeuralBeagle14-7B
layer_range: [0, 32]
merge_method: slerp
base_model: mlabonne/NeuralBeagle14-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.45 # fallback for rest of tensors
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "leveldevai/TurdusBeagle-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
nickrwu/roberta-mqa | nickrwu | 2024-05-19T05:30:03Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"multiple-choice",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2024-05-17T09:42:45Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: roberta-mqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-mqa
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4631
- Accuracy: 0.3793
- F1: 0.3774
- Precision: 0.3819
- Recall: 0.3760
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 28
- eval_batch_size: 28
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.5076 | 1.0 | 1061 | 1.4901 | 0.3372 | 0.3328 | 0.3366 | 0.3321 |
| 1.4244 | 2.0 | 2122 | 1.4584 | 0.3594 | 0.3560 | 0.3615 | 0.3545 |
| 1.3553 | 3.0 | 3183 | 1.4631 | 0.3793 | 0.3774 | 0.3819 | 0.3760 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
sirgecko/finetune-language-detectionnn | sirgecko | 2024-05-19T05:27:59Z | 1 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:ivanlau/language-detection-fine-tuned-on-xlm-roberta-base",
"base_model:adapter:ivanlau/language-detection-fine-tuned-on-xlm-roberta-base",
"region:us"
] | null | 2024-05-19T05:25:25Z | ---
library_name: peft
base_model: ivanlau/language-detection-fine-tuned-on-xlm-roberta-base
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
redponike/Smaug-Llama-3-70B-Instruct-GGUF | redponike | 2024-05-19T05:27:49Z | 1 | 1 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-19T00:31:42Z | GGUF quants of [abacusai/Smaug-Llama-3-70B-Instruct](https://huggingface.co/abacusai/Smaug-Llama-3-70B-Instruct) |
ebowwa/informational_substrate_with_people_profilesv0.1ChatML | ebowwa | 2024-05-19T05:24:27Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-19T05:24:17Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** ebowwa
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
LeoZZzzZZ/bert-tiny-finetuned-fact | LeoZZzzZZ | 2024-05-19T05:23:06Z | 64 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:prajjwal1/bert-tiny",
"base_model:finetune:prajjwal1/bert-tiny",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-19T05:14:28Z | ---
license: mit
base_model: prajjwal1/bert-tiny
tags:
- generated_from_keras_callback
model-index:
- name: LeoZZzzZZ/bert-tiny-finetuned-fact
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# LeoZZzzZZ/bert-tiny-finetuned-fact
This model is a fine-tuned version of [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.2093
- Validation Loss: 1.1000
- Train Accuracy: 0.3913
- Train Precision: 0.1531
- Train Recall: 0.3913
- Train F1: 0.2201
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an equivalent Keras construction is sketched after this list):
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 0.02, 'decay_steps': 11870, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
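For readability, here is a sketch of how the serialized optimizer configuration above would be written directly in Keras; this is a reconstruction from the config dict, not the original training script:

```python
import tensorflow as tf

# PolynomialDecay with power=1.0 is a linear decay from 0.02 to 0.0 over 11870 steps.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=0.02,
    decay_steps=11870,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
)
```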
### Training results
| Train Loss | Validation Loss | Train Accuracy | Train Precision | Train Recall | Train F1 | Epoch |
|:----------:|:---------------:|:--------------:|:---------------:|:------------:|:--------:|:-----:|
| 1.2460 | 1.1773 | 0.3708 | 0.1375 | 0.3708 | 0.2006 | 0 |
| 1.2093 | 1.1000 | 0.3913 | 0.1531 | 0.3913 | 0.2201 | 1 |
### Framework versions
- Transformers 4.40.2
- TensorFlow 2.15.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
AlikS/a2c-PandaPickAndPlace-v3 | AlikS | 2024-05-19T05:21:56Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaPickAndPlace-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-19T05:17:44Z | ---
library_name: stable-baselines3
tags:
- PandaPickAndPlace-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaPickAndPlace-v3
type: PandaPickAndPlace-v3
metrics:
- type: mean_reward
value: -50.00 +/- 0.00
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaPickAndPlace-v3**
This is a trained model of an **A2C** agent playing **PandaPickAndPlace-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is assumed from the standard huggingface_sb3 naming convention and is not confirmed by this card):

```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename assumed from the usual huggingface_sb3 convention for this repo.
checkpoint = load_from_hub("AlikS/a2c-PandaPickAndPlace-v3", "a2c-PandaPickAndPlace-v3.zip")
model = A2C.load(checkpoint)
```
|
mirlab/AkaLlama-llama3-70b-v0.1 | mirlab | 2024-05-19T05:20:27Z | 24 | 24 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"meta",
"llama-3",
"akallama",
"conversational",
"ko",
"en",
"arxiv:2403.07691",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-04T02:55:12Z | ---
library_name: transformers
pipeline_tag: text-generation
license: other
license_name: llama3
license_link: LICENSE
language:
- ko
- en
tags:
- meta
- llama
- llama-3
- akallama
library_name: transformers
---
<a href="https://huggingface.co/collections/mirlab/akallama-66338859b09221f3607fdfcd">
<img src="https://github.com/0110tpwls/project/blob/master/image_720.png?raw=true" width="40%"/>
</a>
# AKALLAMA
AkaLlama is a series of Korean language models designed for practical usability across a wide range of tasks.
The initial model, AkaLlama-v0.1, is a fine-tuned version of Meta-Llama-3-70b-Instruct. It has been trained on a custom mix of publicly available datasets curated by the MIR Lab.
Our goal is to explore cost-effective ways to adapt high-performing LLMs for specific use cases, such as different languages (e.g., Korean) or domains (e.g., organization-specific chatbots).
For details, check out [our project page](https://yonsei-mir.github.io/AkaLLaMA-page).
### Model Description
This is the model card of a 🤗 transformers model that has been pushed on the Hub.
- **Developed by:** [Yonsei MIRLab](https://mirlab.yonsei.ac.kr/)
- **Language(s) (NLP):** Korean, English
- **License:** llama3
- **Finetuned from model:** [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct)
## How to use
This repo provides full model weight files for AkaLlama-70B-v0.1.
### Quantized Weights
| Method | repo |
| :----: | :----: |
| [GGUF](https://github.com/ggerganov/ggml/blob/master/docs/gguf.md) | https://huggingface.co/mirlab/AkaLlama-llama3-70b-v0.1-GGUF |
| [ExLlamaV2](https://github.com/turboderp/exllamav2) | https://huggingface.co/mirlab/AkaLlama-llama3-70b-v0.1-exl2 |
# Use with transformers
See the snippet below for usage with Transformers:
```python
import transformers
import torch
model_id = "mirlab/AkaLlama-llama3-70b-v0.1"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
system_prompt = """당신은 연세대학교 멀티모달 연구실 (MIR lab) 이 만든 대규모 언어 모델인 AkaLlama (아카라마) 입니다.
다음 지침을 따르세요:
1. 사용자가 별도로 요청하지 않는 한 항상 한글로 소통하세요.
2. 유해하거나 비윤리적, 차별적, 위험하거나 불법적인 내용이 답변에 포함되어서는 안 됩니다.
3. 질문이 말이 되지 않거나 사실에 부합하지 않는 경우 정답 대신 그 이유를 설명하세요. 질문에 대한 답을 모른다면 거짓 정보를 공유하지 마세요.
4. 안전이나 윤리에 위배되지 않는 한 사용자의 모든 질문에 완전하고 포괄적으로 답변하세요."""
messages = [
{"role": "system", "content": system_prompt},
{"role": "user", "content": "네 이름은 뭐야?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
# 내 이름은 AkaLlama입니다! 나는 언어 모델로, 사용자와 대화하는 데 도움을 주기 위해 만들어졌습니다. 나는 다양한 주제에 대한 질문에 답하고, 새로운 아이디어를 제공하며, 문제를 해결하는 데 도움이 될 수 있습니다. 사용자가 원하는 정보나 도움을 받도록 최선을 다할 것입니다!
```
## Evaluation
| Model | Precision | LogicKor single turn* | LogicKor multi turn* | Overall* |
|:------------------------:|:---------:|:--------------------:|:---------------------:|:--------:|
| AkaLlama-llama3-70b-v0.1 | float64 | 6.873 | 7.207 | 7.04 |
<!--
| AkaLlama-llama3-70b-v0.1 | bf16 | 6.873 | 7.206 | 7.04 |
-->
*mean over 3 random seeds
## Training Details
### Training Procedure
We trained AkaLlama using a preference learning alignment algorithm called [Odds Ratio Preference Optimization (ORPO)](https://huggingface.co/papers/2403.07691).
Our training pipeline is almost identical to that of [HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1), aside from minor hyperparameter changes.
Please check out Huggingface's [alignment handbook](https://github.com/huggingface/alignment-handbook?tab=readme-ov-file) for further details, including the chat template.
### Training Data
Detailed descriptions regarding training data will be announced later.
### Examples
<!--<a href="https://huggingface.co/collections/mirlab/akallama-66338859b09221f3607fdfcd">
<img src="https://github.com/0110tpwls/project/blob/master/image (8).png?raw=true" width="80%"/>
</a> -->
<pre style="white-space: pre-wrap;">
User:
너는 누구이고 어떤 일을 할 수 있어?
Model:
저는 인공지능 언어 모델인 AkaLlama(아카라마)입니다. 저는 다양한 주제와 형식의 텍스트를 생성할 수 있으며, 질문에 답하고, 대화를 진행하며, 이야기를 만들 수 있습니다. 또한 사용자가 지정한 주제나 장르에 따라 창의적인 글쓰기나 편집 작업을 수행할 수도 있습니다.
</pre>
<details>
<summary><b>Math Solving[CLICK TO EXPAND]</b></summary>
<!-- <a href="https://huggingface.co/collections/mirlab/akallama-66338859b09221f3607fdfcd">
<img src="https://github.com/0110tpwls/project/blob/master/image (9).png?raw=true" width="80%"/>
</a> -->
<pre style="white-space: pre-wrap;">
User:
아래 문제를 깊게 생각하면서 풀어봐
연필 5 타와 지우개 45 개가 있습니다. 이것을 될 수 있는 대로 많은 학생들에게 남김없이 똑같이 나누어 주려고 합니다. 몇 명까지 나누어 줄 수 있습니까?
Model:
아래 문제를 깊게 생각하면서 풀어봅시다:
연필 5 타와 지우개 45 개가 있습니다. 이것을 될 수 있는 대로 많은 학생들에게 남김없이 똑같이 나누어 주려고 합니다. 몇 명까지 나누어 줄 수 있습니까?
풀이:
1. 연필 5 타를 1 개씩 나눠서 몇 명에게 나누어 줄 수 있는지 먼저 구합니다.
- 각 연필에는 12개의 연필이 포함되어 있습니다. 따라서 총 12 * 5 = 60개입니다.
- 60개를 1명당 1개씩 나누면 60명의 학생들에게 나눠줄 수 있습니다.
2. 지우개 45 개를 1 개씩 나누어 몇 명에게 나누어 줄 수 있는지 구합니다.
- 45 개를 1 명당 1 개씩 나누면 45 명의 학생들에게 나눠줄 수 있습니다.
3. 연필과 지우개의 두 가지 물품을 함께 고려하여 가장 많은 학생들에게 똑같이 나누어 줄 수 있는 방법을 찾습니다.
- 연필로만 60명, 지우개로만 45명의 학생에게 나눠줄 수 있으므로, 두 종류의 문구를 함께 고려할 때는 이보다 적은 수의 학생에게 나눠주어야 합니다.
- 이 중에서 공통인수(common divisor)를 찾아서 이를 최소 단위로 설정하면 됩니다.
- 60과 45의 최대공약수(GCD)는 15입니다.
4. 연필과 지우개 각각을 GCD(15)로 나누어 몇 개씩 나눠줄 수 있는지 구합니다.
- 연필은 60 / 15 = 4개씩, 총 4 * 12 = 48개의 연필이 필요합니다.
- 지우개는 45 / 15 = 3개씩, 총 3 * 1 = 3개의 지우개가 필요합니다.
5. 최종적으로, 가장 많은 학생들에게 똑같이 나누어 줄 수 있는 방법은 연필과 지우개를 GCD(15)로 나누어 각 학생에게 4개의 연필과 3개의 지우개를 제공하는 것입니다.
- 이 경우 총 48 + 3 = 51개의 물품을 사용하여 15명의 학생들에게 똑같이 나눠줄 수 있습니다.
결론: 연필과 지우개 각각을 GCD(15)로 나누어 각 학생에게 4개의 연필과 3개의 지우개를 제공하는 방법으로 최대 15명의 학생들에게 똑같이 나누어 줄 수 있습니다.
</pre>
</details>
<details>
<summary><b>Writting[CLICK TO EXPAND]</b></summary>
<!-- <a href="https://huggingface.co/collections/mirlab/akallama-66338859b09221f3607fdfcd">
<img src="https://github.com/0110tpwls/project/blob/master/image (13).png?raw=true" width="80%"/>
</a> -->
<pre style="white-space: pre-wrap;">
User:
아래 문장을 보고 문법적으로 틀린부분을 지적해줘
그는 서슴치 않고 비평한다.
Model:
서슴치"라는 단어는 "서슴"이라는 동사와 "치"라는 접미사를 사용하여 만든 단어입니다. 이 단어는 "서슴"의 의미를 반복적으로 표현하는 데 사용되는 조동사적 용법으로, "서슴"이란 동사의 반복적인 행동을 강조합니다. 따라서 문장에서 "서슴치"라는 단어는 중복적으로 사용되어 문법적으로 틀린 부분입니다.
정답은 다음과 같습니다:
그는 서슴없이 비평한다.
</pre>
<!-- <a href="https://huggingface.co/collections/mirlab/akallama-66338859b09221f3607fdfcd">
<img src="https://github.com/0110tpwls/project/blob/master/image (7).png?raw=true" width="80%"/>
</a> -->
<pre style="white-space: pre-wrap;">
User:
Read the poet Yun Dong-ju's "별 헤는 밤" ("Star-Counting Night"), interpret it in light of the times in which it was written, and then rewrite that interpretation as a monologue in the author's voice.
Model:
Yun Dong-ju wrote this work in 1941 under Japanese imperial rule, trying to express his attachment to his own identity and culture even in an oppressive situation. Looking at a sky full of stars, he describes stars being engraved there one by one; each of these stars carries a different feeling, such as memory, love, loneliness, longing, and poetry.
Calling his mother on one of these stars and missing her from far away, Yun Dong-ju says he writes his name on a hill and covers it over with earth. This act can be seen as an effort to confirm his identity and to recover what has been lost.
He also says that, just as green grass springs up over a grave, grass will grow thick on the hill where his name is buried after he dies. This reflects the social conditions of the time and the poet's inner life, and carries the effort to confirm a Korean identity and recover what was lost, as well as the wish to leave some trace of himself behind after death.
Rewriting the poem as a monologue in the author's voice gives something like this:
"There seem to be so very many stars. Among the stars engraved one by one there is memory, there is love, there is loneliness and longing. And Mother, are you far away in Bukgando? I want to call out to you.
I wrote my name on the hill and covered it over with earth. Indeed, like an insect that cries through the night, I have my reasons for grieving over a shameful name. But when winter passes and spring comes to my star too, grass will grow thick on the hill where my name is buried, just as green grass springs up over a grave.
Looking at these stars, I want to confirm who I am. I want to try to recover what has been lost. I want to leave something of myself behind, even after death."
</pre>
</details>
<details>
<summary><b>Logical Reasoning[CLICK TO EXPAND]</b></summary>
<!-- <a href="https://huggingface.co/collections/mirlab/akallama-66338859b09221f3607fdfcd">
<img src="https://github.com/0110tpwls/project/blob/master/image (15).png?raw=true" width="80%"/>
</a> -->
<pre style="white-space: pre-wrap;">
User:
Tell me about the incident in which King Sejong the Great threw a MacBook Pro.
Model:
The story that King Sejong the Great threw a MacBook Pro is completely unfounded. King Sejong was the fourth king of Joseon and reigned from 1418 to 1450. He took an interest in and studied many fields, including science, literature, and the arts, and made especially large contributions to the creation of Hangul and to astronomy.
The MacBook Pro, meanwhile, is a laptop computer made by Apple, a product that could not have existed with the technology of King Sejong's era. So the story of King Sejong throwing a MacBook Pro simply does not match the facts.
</pre>
</details>
<details>
<summary><b>Coding [CLICK TO EXPAND]</b></summary>
<!-- <a href="https://huggingface.co/collections/mirlab/akallama-66338859b09221f3607fdfcd">
<img src="https://github.com/0110tpwls/project/blob/master/image (11).png?raw=true" width="80%"/>
</a> -->
<pre style="white-space: pre-wrap;">
User:
Implement a simple CNN using PyTorch.
Model:
Sure, of course! Let me show you example code that implements a simple CNN using PyTorch.
First, we need to import the required libraries: PyTorch's `torch` and `nn` modules.
```python
import torch
import torch.nn as nn
```
Next, we define a simple CNN model. In this case, it is a model made up of 2 convolutional layers and 3 pooling layers.
```python
class SimpleCNN(nn.Module):
def __init__(self):
super(SimpleCNN, self).__init__()
self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
def forward(self, x):
x = self.pool(torch.relu(self.conv1(x)))
x = self.pool(torch.relu(self.conv2(x)))
return x
```
Now we initialize the model and prepare the input data. Here, we generate random image data of size 28x28.
```python
model = SimpleCNN()
input_data = torch.randn(1, 1, 28, 28)
```
Finally, we can feed the input data into the model and check the output.
```python
output = model(input_data)
print(output.shape)
```
With this, we have implemented a simple CNN model with PyTorch and checked its output on some input data. Of course, in practice you would need more layers, parameter tuning and so on, but this code is just an example that shows the basic CNN structure. If you have any more questions, feel free to ask anytime!
</pre>
</details>
You can find more examples at [our project page](https://yonsei-mir.github.io/AkaLLaMA-page)
## Warning
Although AkaLlama-70B has significant potential, its responses can sometimes be inaccurate, biased, or misaligned, presenting risks if used without additional testing and refinement. Furthermore, the quality of the model's output is greatly influenced by the system prompt and decoding strategy. Changes in these areas could result in less precise outputs. Therefore, we strongly recommend handling our model with considerable caution.
## Citation
If you find our model useful in your work, please cite the model as follows:
```
@misc{akallama,
  author = {Chung, Jiwan and Jeon, Jaehyun and Kim, Saejin and Lim, Seungwon and Oh, Giyeong and Son, Yejin and Yu, Youngjae},
  title = {AkaLlama: Yonsei University Large Language Model Project},
  year = {2024},
  publisher = {Hugging Face},
  journal = {Hugging Face repository},
  howpublished = {\url{https://huggingface.co/mirlab/AkaLlama-llama3-70b-v0.1}},
}
```
## Contact
We look forward to your feedback and welcome collaboration on this exciting project!
### Contributors
- [YoungJaeYu](https://yj-yu.github.io/home/)
- [Yonsei MIRLab](https://mirlab.yonsei.ac.kr/)
## Special Thanks
- Data Center of the Department of Artificial Intelligence at Yonsei University for the computation resources
## Acknowledgement
- Title image generated by DALL·E 3 |
NikolayKozloff/Master-Yi-9B-Q4_0-GGUF | NikolayKozloff | 2024-05-19T05:15:29Z | 6 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-19T05:15:16Z | ---
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/Master-Yi-9B-Q4_0-GGUF
This model was converted to GGUF format from [`qnguyen3/Master-Yi-9B`](https://huggingface.co/qnguyen3/Master-Yi-9B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/qnguyen3/Master-Yi-9B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Master-Yi-9B-Q4_0-GGUF --model master-yi-9b.Q4_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo NikolayKozloff/Master-Yi-9B-Q4_0-GGUF --model master-yi-9b.Q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m master-yi-9b.Q4_0.gguf -n 128
```
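If you prefer calling the model from Python instead of the CLI, the same GGUF file can also be loaded with the `llama-cpp-python` bindings; the snippet below is a minimal sketch and assumes the file has already been downloaded to the working directory.
```python
# Minimal sketch with llama-cpp-python (pip install llama-cpp-python); the local path is assumed.
from llama_cpp import Llama

llm = Llama(model_path="master-yi-9b.Q4_0.gguf", n_ctx=2048)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```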
|
ninyx/Mistral-7B-Instruct-v0.2-advisegpt-v0.3 | ninyx | 2024-05-19T05:08:57Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-05-17T06:32:05Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.2
datasets:
- generator
metrics:
- bleu
- rouge
model-index:
- name: Mistral-7B-Instruct-v0.2-advisegpt-v0.3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-Instruct-v0.2-advisegpt-v0.3
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0862
- Bleu: {'bleu': 0.9549627224896852, 'precisions': [0.9768137794223292, 0.9601226611596732, 0.9485784293167555, 0.9390826620297074], 'brevity_penalty': 0.9988666836798081, 'length_ratio': 0.998867325397811, 'translation_length': 1126143, 'reference_length': 1127420}
- Rouge: {'rouge1': 0.9750644838957752, 'rouge2': 0.9567876902653232, 'rougeL': 0.9732849754530062, 'rougeLsum': 0.9746665365645586}
- Exact Match: {'exact_match': 0.0}
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 10
- total_train_batch_size: 30
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3
- mixed_precision_training: Native AMP
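As an illustration only, the settings above map onto `transformers.TrainingArguments` roughly as shown below; the PEFT/SFT wiring actually used for this run is not shown in the card, so treat this as a sketch rather than the exact configuration.
```python
# Illustrative mapping of the listed hyperparameters; output_dir is a placeholder.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="advisegpt-v0.3",
    learning_rate=2e-5,
    per_device_train_batch_size=3,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=10,   # 3 x 10 = 30 total train batch size
    lr_scheduler_type="cosine",
    num_train_epochs=3,
    seed=42,
    fp16=True,                        # "Native AMP" mixed precision
)
```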
### Training results
| Training Loss | Epoch | Step | Bleu | Exact Match | Validation Loss | Rouge |
|:-------------:|:------:|:----:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:--------------------:|:---------------:|:---------------------------------------------------------------------------------------------------------------------------:|
| 0.0708 | 0.9991 | 907 | {'bleu': 0.9446206529600942, 'brevity_penalty': 0.9986864062670457, 'length_ratio': 0.9986872682762413, 'precisions': [0.9714789420395403, 0.9503978305171663, 0.9365333504686326, 0.9256591913728304], 'reference_length': 1127420, 'translation_length': 1125940} | {'exact_match': 0.0} | 0.1052 | {'rouge1': 0.9694819646914797, 'rouge2': 0.9464199252414252, 'rougeL': 0.9665470510722093, 'rougeLsum': 0.9687792447488508} |
| 0.0611 | 1.9991 | 1814 | {'bleu': 0.9535151066703249, 'precisions': [0.9762399786139381, 0.9589412451791418, 0.9470130412549163, 0.9372328452904729], 'brevity_penalty': 0.9987103859226171, 'length_ratio': 0.998711216760391, 'translation_length': 1125967, 'reference_length': 1127420} | {'exact_match': 0.0} | 0.0878 | {'rouge1': 0.9743797099363829, 'rouge2': 0.9554568193403455, 'rougeL': 0.9724812167922234, 'rougeLsum': 0.9739500654981077} |
| 0.051 | 2.9982 | 2721 | {'bleu': 0.9549627224896852, 'precisions': [0.9768137794223292, 0.9601226611596732, 0.9485784293167555, 0.9390826620297074], 'brevity_penalty': 0.9988666836798081, 'length_ratio': 0.998867325397811, 'translation_length': 1126143, 'reference_length': 1127420} | {'exact_match': 0.0} | 0.0862 | {'rouge1': 0.9750644838957752, 'rouge2': 0.9567876902653232, 'rougeL': 0.9732849754530062, 'rougeLsum': 0.9746665365645586} |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
coolguyleo/results-20 | coolguyleo | 2024-05-19T04:51:04Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2024-05-19T04:50:51Z | ---
license: llama3
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B-Instruct
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 20
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 |
Jiahuixu/occt5 | Jiahuixu | 2024-05-19T04:27:10Z | 52 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"t5",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-05-02T05:09:57Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# t5-occ
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Jiahuixu/occt5')
embeddings = model.encode(sentences)
print(embeddings)
```
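Since the card highlights clustering and semantic search, a small follow-up sketch using the built-in cosine-similarity utility may be helpful (the model id is assumed to be this repository):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('Jiahuixu/occt5')  # assumed to be this repository
emb = model.encode(["This is an example sentence", "Each sentence is converted"], convert_to_tensor=True)
print(util.cos_sim(emb[0], emb[1]))  # cosine similarity between the two embeddings
```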
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Jiahuixu/occt5)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 5130 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: T5EncoderModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Dense({'in_features': 1024, 'out_features': 768, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
(3): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
asiansoul/llama-3-Korean-Bllossom-120B-GGUF | asiansoul | 2024-05-19T04:22:58Z | 0 | 0 | transformers | [
"transformers",
"mergekit",
"merge",
"base_model:Bllossom/llama-3-Korean-Bllossom-70B",
"base_model:finetune:Bllossom/llama-3-Korean-Bllossom-70B",
"endpoints_compatible",
"region:us"
] | null | 2024-05-18T23:56:01Z | ---
base_model:
- Bllossom/llama-3-Korean-Bllossom-70B
library_name: transformers
tags:
- mergekit
- merge
---
🌋🌋 Hugging Face Upload Issue
The maximum individual file size for a Hugging Face upload is 50.0GB.
To get under that limit, the GGUF file was split into part_aa, part_ab, and part_ac chunks as my "Practical Idea".
After downloading this repo to a local folder, run the commands below.
Download from Hugging Face (change the download path; here it is "./"):
```
huggingface-cli download asiansoul/llama-3-Korean-Bllossom-120B-GGUF --local-dir='./'
```
Merge the split files back into one GGUF file (run this inside the "./" download path):
```
cat part_* > llama-3-korean-bllossom-120b-Q4_K_M.gguf
```
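For reference, chunks named `part_aa`, `part_ab`, `part_ac` are what the standard `split` utility produces, so a command along these lines was presumably used before upload (the 48G chunk size is an assumption chosen to stay under the 50GB limit, not the exact command):
```
split -b 48G llama-3-korean-bllossom-120b-Q4_K_M.gguf part_
```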
Uploading a ready-to-use GGUF rather than just the original files seemed more helpful to you, so I'm uploading it this way even though it takes a bit of extra work.
```
Perhaps this will be the first GGUF upload on Hugging Face with a single file of over 50GB?
Other 120B models keep each individual file under 50GB, which is why they can be uploaded directly.
Sometimes we need to use a trick called chunks.
```
Please wait for the upload to finish.....
### 🇰🇷 About the JayLee "AsianSoul"
```
"A leader who can make you rich 💵 !!!"
"Prove yourself with actual results, not just saying I know more than you!!!"
```
<a href="https://ibb.co/4g2SJVM"><img src="https://i.ibb.co/PzMWt64/Screenshot-2024-05-18-at-11-08-12-PM.png" alt="Screenshot-2024-05-18-at-11-08-12-PM" border="0"></a>
### About this model storytelling
This is a 120B model based on [Bllossom/llama-3-Korean-Bllossom-70B](https://huggingface.co/Bllossom/llama-3-Korean-Bllossom-70B)
☕ I started this Korean 120B model merge while drinking an iced Americano at Starbucks, referring to other merges such as [Cognitive Computations 120B](https://huggingface.co/cognitivecomputations/MegaDolphin-120b).
If you walk around a Starbucks in Seoul, Korea, you may see someone creating a merge and an application based on it.
If you do, please come up to me and say "hello".
"Also, if you want to build the application project you have in mind and can provide me with support, I will design the entire architecture for you, whatever it is."
🏎️ I am a person whose goal is to turn the great results created by great genius scientists & groups around the world into profitable ones.
```
My role model is J. Robert Oppenheimer!!!
J. Robert Oppenheimer is highly regarded for his ability to gather and lead a team of brilliant scientists, merging their diverse expertise and efforts towards a common goal.
```
[Learn more about J. Robert Oppenheimer](https://en.wikipedia.org/wiki/J._Robert_Oppenheimer).
I hope this 120B is a helpful model for your future.
```
🌍 Collaboration is always welcome 🌍
👊 You can't beat these giant corporations & groups alone and you can never become rich.
Now we have to come together.
If you are someone who can actually become rich together with me, let's collaborate!!! 🍸
```
```
About Bllossom/llama-3-Korean-Bllossom-70B
- Full model of over 100GB released by the Bllossom team
- A first for Korean! Korean vocabulary expanded by over 30,000 words
- Can handle Korean context roughly 25% longer than Llama 3
- Korean-English knowledge connected via a Korean-English parallel corpus (pre-training)
- Fine-tuned on data produced by linguists with Korean culture and language in mind
- Reinforcement learning
🛰️ About asiansoul/llama-3-Korean-Bllossom-120B-GGUF
- Q4_K_M : needs a GPU with 68GB or more, or a CPU with 68GB or more of memory
- I hope to upload more quantizations, but then your computer may not be able to handle them, you know what I mean!!
```
### Models Merged
The following models were included in the merge:
* [Bllossom/llama-3-Korean-Bllossom-70B](https://huggingface.co/Bllossom/llama-3-Korean-Bllossom-70B)
### Ollama
Check the information indicated above and run it when your computer is ready.
🥶 Otherwise, your computer will freeze.
* Create
```
ollama create Bllossom -f ./Modelfile_Q4_K_M
```
* MODELFILE (you can change this for your preference)
```
FROM ./llama-3-korean-bllossom-120b-Q4_K_M.gguf
TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
{{ .Response }}<|eot_id|>"""
SYSTEM """
당신은 유용한 AI 어시스턴트입니다. 사용자의 질의에 대해 친절하고 정확하게 답변해야 합니다.
You are a helpful AI assistant, you'll need to answer users' queries in a friendly and accurate manner.
"""
PARAMETER num_ctx 1024
PARAMETER num_keep 24
PARAMETER temperature 0.6
PARAMETER top_p 0.9
PARAMETER num_predict 2048
PARAMETER num_thread 20
PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER stop "<|eot_id|>"
```
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- layer_range: [0, 20]
model: Bllossom/llama-3-Korean-Bllossom-70B
- sources:
- layer_range: [10, 30]
model: Bllossom/llama-3-Korean-Bllossom-70B
- sources:
- layer_range: [20, 40]
model: Bllossom/llama-3-Korean-Bllossom-70B
- sources:
- layer_range: [30, 50]
model: Bllossom/llama-3-Korean-Bllossom-70B
- sources:
- layer_range: [40, 60]
model: Bllossom/llama-3-Korean-Bllossom-70B
- sources:
- layer_range: [50, 70]
model: Bllossom/llama-3-Korean-Bllossom-70B
- sources:
- layer_range: [60, 80]
model: Bllossom/llama-3-Korean-Bllossom-70B
merge_method: passthrough
dtype: float16
``` |
cminja/Phi-3-mini-128k-instruct-Q4_K_M-GGUF | cminja | 2024-05-19T04:21:41Z | 0 | 0 | null | [
"gguf",
"nlp",
"code",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-05-19T04:21:34Z | ---
language:
- en
license: mit
tags:
- nlp
- code
- llama-cpp
- gguf-my-repo
license_link: https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
---
# cminja/Phi-3-mini-128k-instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`microsoft/Phi-3-mini-128k-instruct`](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo cminja/Phi-3-mini-128k-instruct-Q4_K_M-GGUF --model phi-3-mini-128k-instruct.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo cminja/Phi-3-mini-128k-instruct-Q4_K_M-GGUF --model phi-3-mini-128k-instruct.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m phi-3-mini-128k-instruct.Q4_K_M.gguf -n 128
```
|
ihebMissaoui/inssuranceChatGptV31024ctxr32batch6epochs6 | ihebMissaoui | 2024-05-19T04:20:46Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"unsloth",
"generated_from_trainer",
"base_model:unsloth/tinyllama-bnb-4bit",
"base_model:adapter:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"region:us"
] | null | 2024-05-19T02:59:23Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- unsloth
- generated_from_trainer
base_model: unsloth/tinyllama-bnb-4bit
model-index:
- name: inssuranceChatGptV31024ctxr32batch6epochs6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# inssuranceChatGptV31024ctxr32batch6epochs6
This model is a fine-tuned version of [unsloth/tinyllama-bnb-4bit](https://huggingface.co/unsloth/tinyllama-bnb-4bit) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 6
- eval_batch_size: 8
- seed: 3407
- gradient_accumulation_steps: 12
- total_train_batch_size: 72
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.16.0
- Tokenizers 0.15.2 |
hanyundudddd/FinetunedModel_MovieReview_SentimentAnalysis | hanyundudddd | 2024-05-19T04:12:16Z | 120 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-19T04:09:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AlikS/a2c-PandaReachDense-v3 | AlikS | 2024-05-19T04:05:47Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-19T04:01:34Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.22 +/- 0.09
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading example (the checkpoint filename below is an assumption based on the usual `algo-EnvId.zip` naming convention; check the repository's file list if it differs):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is assumed; adjust if the repo uses a different name.
checkpoint = load_from_hub("AlikS/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
chitb/starcoder_v2_null | chitb | 2024-05-19T04:00:52Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-19T04:00:52Z | ---
license: apache-2.0
---
|
STomoya/caformer_m36.st_safebooru_1k | STomoya | 2024-05-19T04:00:32Z | 15 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"license:apache-2.0",
"region:us"
] | image-classification | 2024-05-19T04:00:11Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
---
# Model card for caformer_m36.st_safebooru_1k
## Model Details
- **metrics:**
|Precision|Recall|F1-score|
|-|-|-|
|0.8046851514745909|0.5213825354450625|0.6103918807791574|
|
nickrwu/roberta-large-finetuned-race | nickrwu | 2024-05-19T04:00:22Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"multiple-choice",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2024-05-19T03:59:52Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: roberta-large-finetuned-race
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-finetuned-race
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5249
- Accuracy: 0.3094
- F1: 0.3011
- Precision: 0.3292
- Recall: 0.3011
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 28
- eval_batch_size: 28
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.5519 | 1.1310 | 1200 | 1.5674 | 0.2998 | 0.2964 | 0.3018 | 0.2954 |
| 1.5254 | 2.2620 | 2400 | 1.5249 | 0.3094 | 0.3011 | 0.3292 | 0.3011 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
HaileyStorm/chess-mamba-vs-xformer | HaileyStorm | 2024-05-19T03:56:18Z | 0 | 2 | null | [
"license:mit",
"region:us"
] | null | 2024-03-12T06:59:54Z | ---
license: mit
---
For an explanation of this project and the models trained for it, please see the [Report](Report/REPORT.md).
The root folder contains scripts for dataset preprocessing.
[chess-mamba-vs-xformer](../../tree/main/chess-mamba-vs-xformer/) contains the training scripts.
Config files, used to set model configuration and training hyperparameters, are in [chess-mamba-vs-xformer/config](../../tree/main/chess-mamba-vs-xformer/config).
Model checkpoints are in [chess-mamba-vs-xformer/out](../../tree/main/chess-mamba-vs-xformer/out). The final checkpoint for each completed model (e.g. Mamba and Transformer 50M) is .../anneal/anneal_complete.pt.
[chess-gpt-eval](../../tree/main/chess-gpt-eval/) has the scripts for model evaluation - playing games against the Stockfish or lc0 chess engines. The logs folder contains raw evaluation metrics.
[chess-gpt-eval-contrastive](../../tree/main/chess-gpt-eval-contrastive/) likewise has the scripts for model evaluation, but modified for training and evaluation of contrastive activation and linear probes. The logs folder again contains raw evaluation metrics. |
leondu/Models-RoBERTa-1716084783.142457 | leondu | 2024-05-19T03:51:33Z | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-19T02:14:50Z | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Models-RoBERTa-1716084783.142457
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Models-RoBERTa-1716084783.142457
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4856
- Accuracy: 0.8199
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.5683 | 1.6384 | 2048 | 0.4856 | 0.8199 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
vuongnhathien/swin-base-30vn | vuongnhathien | 2024-05-19T03:43:25Z | 150 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swinv2",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:vuongnhathien/swin-base-30vn",
"base_model:finetune:vuongnhathien/swin-base-30vn",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-18T15:08:58Z | ---
license: apache-2.0
base_model: vuongnhathien/swin-base-30vn
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: swin-base-30vn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-base-30vn
This model is a fine-tuned version of [vuongnhathien/swin-base-30vn](https://huggingface.co/vuongnhathien/swin-base-30vn) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
fzzhang/mistralv1_dora_r16_25e5_e05_merged | fzzhang | 2024-05-19T03:41:28Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-19T03:38:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jacoboggleon/llama-3-8b-bnb-4bit-SFT | jacoboggleon | 2024-05-19T03:35:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-19T03:34:57Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** jacoboggleon
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
harshamiddela/biogpt-disease-ner | harshamiddela | 2024-05-19T03:34:29Z | 10 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"token-classification",
"generated_from_trainer",
"base_model:microsoft/biogpt",
"base_model:finetune:microsoft/biogpt",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-05-19T02:58:33Z | ---
license: mit
base_model: microsoft/biogpt
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: biogpt-disease-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biogpt-disease-ner
This model is a fine-tuned version of [microsoft/biogpt](https://huggingface.co/microsoft/biogpt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1525
- Precision: 0.5241
- Recall: 0.6075
- F1: 0.5628
- Accuracy: 0.9522
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.27 | 1.0 | 593 | 0.1769 | 0.4632 | 0.4523 | 0.4577 | 0.9395 |
| 0.1581 | 2.0 | 1186 | 0.1561 | 0.5337 | 0.5083 | 0.5207 | 0.9475 |
| 0.1225 | 3.0 | 1779 | 0.1525 | 0.5241 | 0.6075 | 0.5628 | 0.9522 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
fzzhang/mistralv1_dora_r32_25e5_e05_merged | fzzhang | 2024-05-19T03:28:30Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-19T03:25:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/PracticeLLM_-_SOLAR-tail-10.7B-Merge-v1.0-8bits | RichardErkhov | 2024-05-19T03:20:20Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-19T03:10:41Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
SOLAR-tail-10.7B-Merge-v1.0 - bnb 8bits
- Model creator: https://huggingface.co/PracticeLLM/
- Original model: https://huggingface.co/PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0/
Original model description:
---
language:
- en
- ko
license: cc-by-nc-sa-4.0
pipeline_tag: text-generation
model-index:
- name: SOLAR-tail-10.7B-Merge-v1.0
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.13
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.54
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.52
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 60.57
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.77
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.58
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0
name: Open LLM Leaderboard
---
# **SOLAR-tail-10.7B-Merge-v1.0**
## Model Details
**Model Developers** Kyujin Han (kyujinpy)
**Method**
Using [Mergekit](https://github.com/cg123/mergekit).
- [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0)
- [Yhyu13/LMCocktail-10.7B-v1](https://huggingface.co/Yhyu13/LMCocktail-10.7B-v1)
**Merge config**
```
slices:
- sources:
- model: upstage/SOLAR-10.7B-v1.0
layer_range: [0, 48]
- model: Yhyu13/LMCocktail-10.7B-v1
layer_range: [0, 48]
merge_method: slerp
base_model: upstage/SOLAR-10.7B-v1.0
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
tokenizer_source: union
dtype: float16
```
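For readers who want to reproduce a merge like this, a config such as the one above is normally passed to mergekit's command-line entry point; the config file name and output path below are placeholders, and mergekit is assumed to be installed already:
```
mergekit-yaml solar-tail-merge.yml ./SOLAR-tail-10.7B-Merge-v1.0 --cuda
```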
# **Model Benchmark**
## Open Ko leaderboard
- Follow up as [Ko-link](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Ko-CommonGenV2 |
| --- | --- | --- | --- | --- | --- | --- |
| PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0 | 48.32 | 45.73 | 56.97 | 38.77 | 38.75 | 61.16 |
| jjourney1125/M-SOLAR-10.7B-v1.0 | 55.15 | 49.57 | 60.12 | 54.60 | 49.23 | 62.22 |
- Follow up as [En-link](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- | --- |
| PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0 | 71.68 | 66.13 | 86.54 | **66.52** | 60.57 | **84.77** | **65.58** |
| kyujinpy/Sakura-SOLAR-Instruct | **74.40** | **70.99** | **88.42** | 66.33 | **71.79** | 83.66 | 65.20 |
## lm-evaluation-harness
```
gpt2 (pretrained=PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0), limit: None, provide_description: False, num_fewshot: 0, batch_size: None
| Task |Version| Metric |Value | |Stderr|
|----------------|------:|--------|-----:|---|-----:|
|kobest_boolq | 0|acc |0.5021|± |0.0133|
| | |macro_f1|0.3343|± |0.0059|
|kobest_copa | 0|acc |0.6220|± |0.0153|
| | |macro_f1|0.6217|± |0.0154|
|kobest_hellaswag| 0|acc |0.4380|± |0.0222|
| | |acc_norm|0.5380|± |0.0223|
| | |macro_f1|0.4366|± |0.0222|
|kobest_sentineg | 0|acc |0.4962|± |0.0251|
| | |macro_f1|0.3316|± |0.0113|
```
# Implementation Code
```python
### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0"
OpenOrca = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```
---
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_PracticeLLM__SOLAR-tail-10.7B-Merge-v1.0)
| Metric |Value|
|---------------------------------|----:|
|Avg. |71.68|
|AI2 Reasoning Challenge (25-Shot)|66.13|
|HellaSwag (10-Shot) |86.54|
|MMLU (5-Shot) |66.52|
|TruthfulQA (0-shot) |60.57|
|Winogrande (5-shot) |84.77|
|GSM8k (5-shot) |65.58|
|
madgrizzle/starsnatched-MemGPT-DPO-MoE-test-exl2 | madgrizzle | 2024-05-19T03:18:05Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"MemGPT",
"function",
"function calling",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"exl2",
"region:us"
] | text-generation | 2024-05-19T00:48:07Z | ---
library_name: transformers
license: apache-2.0
language:
- en
tags:
- MemGPT
- function
- function calling
---
This is a test release of the DPO version of the [MemGPT](https://github.com/cpacker/MemGPT) language model.
This model has been quantized to 8 bits per weight using ExLlamaV2 (exl2).
# Model Description
This repository contains a MoE (Mixture of Experts) model built from [Mistral 7B Instruct](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2), using 2 experts per token. The model is designed specifically for function calling in MemGPT and demonstrates performance comparable to GPT-4 when working with MemGPT.
# Key Features
* Function calling
* Dedicated to working with MemGPT
* Supports medium-length context, up to sequences of 8,192 tokens
# Prompt Format
This model uses **ChatML** prompt format:
```
<|im_start|>system
{system_instruction}<|im_end|>
<|im_start|>user
{user_message}<|im_end|>
<|im_start|>assistant
{assistant_response}<|im_end|>
```
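For reference, a minimal sketch of assembling this template in Python is shown below. The system instruction and user message are illustrative only; in practice MemGPT builds these prompts (including its function-calling scaffolding) for you.
```python
# Sketch: assembling a ChatML-formatted prompt string by hand (MemGPT does this for you)
def build_chatml_prompt(system_instruction: str, user_message: str) -> str:
    # Follows the template above; the assistant turn is left open for the model to complete.
    return (
        f"<|im_start|>system\n{system_instruction}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a MemGPT agent with function-calling abilities.",  # illustrative system instruction
    "Remember that my favourite colour is green.",              # illustrative user message
)
print(prompt)
```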
# Usage
This model is designed to be run on multiple backends, such as [oobabooga's text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Simply install your preferred backend, and then load up this model.
Then configure MemGPT using `memgpt configure`, and chat with it via the `memgpt run` command.
# Model Details
* Developed by: @starsnatched
* Model type: This repo contains a language model based on the transformer decoder architecture.
* Language: English
* Contact: For any questions, concerns, or comments about this model, please contact me on Discord: @starsnatched.
# Training Infrastructure
* Hardware: The model in this repo was trained on 2x A100 80GB GPUs.
# Intended Use
The model is designed to be used as the base model for MemGPT agents.
# Limitations and Risks
The model may exhibit unreliable, unsafe, or biased behaviours. Please double check the results this model may produce. |
RichardErkhov/PracticeLLM_-_SOLAR-tail-10.7B-Merge-v1.0-4bits | RichardErkhov | 2024-05-19T03:09:54Z | 77 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-19T03:05:12Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
SOLAR-tail-10.7B-Merge-v1.0 - bnb 4bits
- Model creator: https://huggingface.co/PracticeLLM/
- Original model: https://huggingface.co/PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0/
Original model description:
---
language:
- en
- ko
license: cc-by-nc-sa-4.0
pipeline_tag: text-generation
model-index:
- name: SOLAR-tail-10.7B-Merge-v1.0
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.13
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.54
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.52
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 60.57
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.77
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.58
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0
name: Open LLM Leaderboard
---
# **SOLAR-tail-10.7B-Merge-v1.0**
## Model Details
**Model Developers** Kyujin Han (kyujinpy)
**Method**
Using [Mergekit](https://github.com/cg123/mergekit).
- [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0)
- [Yhyu13/LMCocktail-10.7B-v1](https://huggingface.co/Yhyu13/LMCocktail-10.7B-v1)
**Merge config**
```
slices:
- sources:
- model: upstage/SOLAR-10.7B-v1.0
layer_range: [0, 48]
- model: Yhyu13/LMCocktail-10.7B-v1
layer_range: [0, 48]
merge_method: slerp
base_model: upstage/SOLAR-10.7B-v1.0
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
tokenizer_source: union
dtype: float16
```
# **Model Benchmark**
## Open Ko leaderboard
- Follow up as [Ko-link](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Ko-CommonGenV2 |
| --- | --- | --- | --- | --- | --- | --- |
| PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0 | 48.32 | 45.73 | 56.97 | 38.77 | 38.75 | 61.16 |
| jjourney1125/M-SOLAR-10.7B-v1.0 | 55.15 | 49.57 | 60.12 | 54.60 | 49.23 | 62.22 |
- Follow up as [En-link](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- | --- |
| PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0 | 71.68 | 66.13 | 86.54 | **66.52** | 60.57 | **84.77** | **65.58** |
| kyujinpy/Sakura-SOLAR-Instruct | **74.40** | **70.99** | **88.42** | 66.33 | **71.79** | 83.66 | 65.20 |
## lm-evaluation-harness
```
gpt2 (pretrained=PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0), limit: None, provide_description: False, num_fewshot: 0, batch_size: None
| Task |Version| Metric |Value | |Stderr|
|----------------|------:|--------|-----:|---|-----:|
|kobest_boolq | 0|acc |0.5021|± |0.0133|
| | |macro_f1|0.3343|± |0.0059|
|kobest_copa | 0|acc |0.6220|± |0.0153|
| | |macro_f1|0.6217|± |0.0154|
|kobest_hellaswag| 0|acc |0.4380|± |0.0222|
| | |acc_norm|0.5380|± |0.0223|
| | |macro_f1|0.4366|± |0.0222|
|kobest_sentineg | 0|acc |0.4962|± |0.0251|
| | |macro_f1|0.3316|± |0.0113|
```
# Implementation Code
```python
### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0"
OpenOrca = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```
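Note that the implementation code above targets the original full-precision repository. To load the bitsandbytes 4-bit quantization hosted in this repository, a minimal sketch (repository name taken from this page; everything else is an assumption, not confirmed by the card) could look like:
```python
# Sketch: loading this pre-quantized 4-bit (bitsandbytes) checkpoint.
# Assumes `bitsandbytes` and `accelerate` are installed; the quantization config
# embedded in the checkpoint is applied automatically on load.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_4bit = "RichardErkhov/PracticeLLM_-_SOLAR-tail-10.7B-Merge-v1.0-4bits"

tokenizer = AutoTokenizer.from_pretrained(repo_4bit)
model = AutoModelForCausalLM.from_pretrained(repo_4bit, device_map="auto")
```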
---
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_PracticeLLM__SOLAR-tail-10.7B-Merge-v1.0)
| Metric |Value|
|---------------------------------|----:|
|Avg. |71.68|
|AI2 Reasoning Challenge (25-Shot)|66.13|
|HellaSwag (10-Shot) |86.54|
|MMLU (5-Shot) |66.52|
|TruthfulQA (0-shot) |60.57|
|Winogrande (5-shot) |84.77|
|GSM8k (5-shot) |65.58|
|
DaYin/llava-v1.5-7b_fgvc_baseline | DaYin | 2024-05-19T02:46:14Z | 11 | 0 | transformers | [
"transformers",
"safetensors",
"llava_llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-19T02:34:13Z | ---
license: apache-2.0
---
|
Rhma/LlamaMigr10 | Rhma | 2024-05-19T02:45:26Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-19T02:41:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
netcat420/MFANN3bv0.6 | netcat420 | 2024-05-19T02:37:32Z | 20 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"text-classification",
"dataset:netcat420/MFANN",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-02T04:17:23Z | ---
library_name: transformers
license: apache-2.0
datasets:
- netcat420/MFANN
pipeline_tag: text-classification
---
MFANN 3b version 0.6

fine-tuned on the MFANN dataset as it stood on 5/2/2024, as it is an ever-changing and expanding dataset. A minimal usage sketch is shown below.
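The repository name in the sketch comes from this page, while the prompt format, dtype, and generation settings are assumptions rather than details documented in this card.
```python
# Minimal loading/generation sketch (prompt format and settings are assumptions)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "netcat420/MFANN3bv0.6"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

prompt = "### Instruction:\nExplain what fine-tuning a language model means.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```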
benchmark results for this 3b model:
64.34 <-- Average
62.63 <-- Arc
77.1 <-- HellaSwag
58.43 <-- MMLU
51.71 <-- TruthfulQA
74.66 <-- Winogrande
61.49 <-- GSM8K
currently the world's best 2.78B parameter model as of 5/2/2024!!! |
asiansoul/llama-3-Korean-Bllossom-120B | asiansoul | 2024-05-19T02:29:46Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Bllossom/llama-3-Korean-Bllossom-70B",
"base_model:finetune:Bllossom/llama-3-Korean-Bllossom-70B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-18T07:24:21Z | ---
base_model:
- Bllossom/llama-3-Korean-Bllossom-70B
library_name: transformers
tags:
- mergekit
- merge
---
### 🇰🇷 About JayLee "AsianSoul"
```
"A leader who can make you rich 💵 !!!"
"Prove yourself with actual results, not just saying I know more than you!!!"
```
<a href="https://ibb.co/4g2SJVM"><img src="https://i.ibb.co/PzMWt64/Screenshot-2024-05-18-at-11-08-12-PM.png" alt="Screenshot-2024-05-18-at-11-08-12-PM" border="0"></a>
### About this model
This is a 120B model based on [Bllossom/llama-3-Korean-Bllossom-70B](https://huggingface.co/Bllossom/llama-3-Korean-Bllossom-70B)
☕ I started this Korean 120B model merge while drinking an iced Americano at Starbucks, referring to other 120B merges such as [Cognitive Computations' MegaDolphin 120B](https://huggingface.co/cognitivecomputations/MegaDolphin-120b).
If you walk around Starbucks in Seoul, Korea, you will see someone creating a merge and an application based on it.
At that time, please come up to me and say "hello".
"Also, if you want to create the Application project you want and provide me with support, I will create the entire architecture for you whatever it is."
🏎️ I am a person whose goal is to turn the great results created by great genius scientists & groups around the world into profitable ones.
```
My role model is J. Robert Oppenheimer!!!
J. Robert Oppenheimer is highly regarded for his ability to gather and lead a team of brilliant scientists, merging their diverse expertise and efforts towards a common goal.
```
[Learn more about J. Robert Oppenheimer](https://en.wikipedia.org/wiki/J._Robert_Oppenheimer).
I hope this 120B is a helpful model for your future.
```
🌍 Collaboration is always welcome 🌍
👊 You can't beat these giant corporations & groups alone and you can never become rich.
Now we have to come together.
If you're someone who can actually become rich together with me, let's collaborate!!! 🍸
```
```
About Bllossom/llama-3-Korean-Bllossom-70B
- Full model of over 100GB released by the Bllossom team
- First in Korean! Korean vocabulary expanded to over 30,000 words
- Capable of processing Korean context approximately 25% longer than Llama 3
- Connects Korean and English knowledge using a Korean-English parallel corpus (pre-training)
- Fine-tuned on data produced by linguists, with Korean culture and language in mind
- Reinforcement learning
🛰️ About asiansoul/llama-3-Korean-Bllossom-120B-GGUF
- Just Do It
```
### Models Merged
The following models were included in the merge:
* [Bllossom/llama-3-Korean-Bllossom-70B](https://huggingface.co/Bllossom/llama-3-Korean-Bllossom-70B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- layer_range: [0, 20]
model: Bllossom/llama-3-Korean-Bllossom-70B
- sources:
- layer_range: [10, 30]
model: Bllossom/llama-3-Korean-Bllossom-70B
- sources:
- layer_range: [20, 40]
model: Bllossom/llama-3-Korean-Bllossom-70B
- sources:
- layer_range: [30, 50]
model: Bllossom/llama-3-Korean-Bllossom-70B
- sources:
- layer_range: [40, 60]
model: Bllossom/llama-3-Korean-Bllossom-70B
- sources:
- layer_range: [50, 70]
model: Bllossom/llama-3-Korean-Bllossom-70B
- sources:
- layer_range: [60, 80]
model: Bllossom/llama-3-Korean-Bllossom-70B
merge_method: passthrough
dtype: float16
``` |
zcamz/bert-finetuned-toxic | zcamz | 2024-05-19T02:26:35Z | 110 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-18T17:45:47Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: bert-finetuned-toxic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-toxic
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3207
- F1: 0.7032
- Roc Auc: 0.9143
- Accuracy: 0.9069
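A minimal inference sketch is shown below. The multi-label (sigmoid) setup and the 0.5 decision threshold are assumptions suggested by the reported F1/ROC AUC metrics, not details confirmed by this card; the label names are read from the checkpoint config.
```python
# Minimal inference sketch (multi-label setup and 0.5 threshold are assumptions)
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "zcamz/bert-finetuned-toxic"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

text = "You are a wonderful person."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.sigmoid(logits)[0]            # one probability per label
for idx, p in enumerate(probs):
    label = model.config.id2label[idx]      # label names come from the checkpoint config
    if p > 0.5:
        print(f"{label}: {p:.3f}")
```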
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log | 1.0 | 499 | 0.1740 | 0.5646 | 0.9544 | 0.8619 |
| 0.2962 | 2.0 | 998 | 0.1595 | 0.5994 | 0.9551 | 0.8691 |
| 0.1545 | 3.0 | 1497 | 0.1715 | 0.6322 | 0.9509 | 0.8776 |
| 0.1218 | 4.0 | 1996 | 0.1883 | 0.6412 | 0.9467 | 0.8870 |
| 0.0976 | 5.0 | 2495 | 0.2497 | 0.6808 | 0.9265 | 0.9037 |
| 0.0807 | 6.0 | 2994 | 0.2411 | 0.6742 | 0.9331 | 0.8917 |
| 0.0682 | 7.0 | 3493 | 0.2955 | 0.6922 | 0.9183 | 0.8995 |
| 0.0597 | 8.0 | 3992 | 0.3207 | 0.7032 | 0.9143 | 0.9069 |
| 0.0533 | 9.0 | 4491 | 0.3207 | 0.6977 | 0.9158 | 0.9044 |
| 0.0487 | 10.0 | 4990 | 0.3407 | 0.7028 | 0.9091 | 0.9073 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
MinhViet/bartpho-linear-test1 | MinhViet | 2024-05-19T02:24:12Z | 177 | 0 | transformers | [
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-19T02:23:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
langgptai/Qwen-sft-la-v0.1 | langgptai | 2024-05-19T02:23:48Z | 1 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-4B-Chat",
"base_model:adapter:Qwen/Qwen1.5-4B-Chat",
"license:other",
"region:us"
] | null | 2024-05-19T01:30:55Z | ---
license: other
library_name: peft
tags:
- llama-factory
- lora
- generated_from_trainer
base_model: Qwen/Qwen1.5-4B-Chat
model-index:
- name: full_alpaca
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# full_alpaca
This model is a fine-tuned version of [Qwen/Qwen1.5-4B-Chat](https://huggingface.co/Qwen/Qwen1.5-4B-Chat) on the LangGPT_community and the LangGPT_alpaca datasets.
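Since this repository contains a LoRA adapter (PEFT) rather than full model weights, a minimal usage sketch would load the base model and then attach the adapter. The snippet below is such a sketch; the example prompt and generation settings are assumptions, not part of this card.
```python
# Sketch: attaching this LoRA adapter to its base model with PEFT
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen1.5-4B-Chat"
adapter_id = "langgptai/Qwen-sft-la-v0.1"   # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

messages = [{"role": "user", "content": "Write a LangGPT-style role prompt for a travel planner."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(base_model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```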
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
Zlovoblachko/en_stellar_butterfly | Zlovoblachko | 2024-05-19T02:21:32Z | 0 | 0 | spacy | [
"spacy",
"token-classification",
"en",
"region:us"
] | token-classification | 2024-05-19T01:49:45Z | ---
tags:
- spacy
language:
- en
model-index:
- name: en_stellar_butterfly
results: []
pipeline_tag: token-classification
---
| Feature | Description |
| --- | --- |
| **Name** | `en_stellar_butterfly` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.4.0,<3.5.0` |
| **Default Pipeline** | `transformer`, `spancat` |
| **Components** | `transformer`, `spancat` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (5 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`spancat`** | `Tense semantics`, `Synonyms`, `Copying expression`, `Word form transmission`, `Transliteration` |
</details>
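A minimal usage sketch is shown below; it assumes the pipeline package built from this repo has been installed so that `spacy.load` can resolve the name, and the example sentence is illustrative.
```python
# Minimal usage sketch (assumes the pipeline package is installed locally)
import spacy

nlp = spacy.load("en_stellar_butterfly")
doc = nlp("The tense here was copied directly from the source-language sentence.")

# spancat predictions live in doc.spans; the span key is often "sc" by default,
# but inspect doc.spans.keys() to confirm which key this pipeline uses.
for key, spans in doc.spans.items():
    for span in spans:
        print(key, span.label_, span.text)
```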
### Accuracy
| Type | Score |
| --- | --- |
| `SPANS_SC_F` | 84.97 |
| `SPANS_SC_P` | 88.11 |
| `SPANS_SC_R` | 82.05 |
| `TRANSFORMER_LOSS` | 6185.29 |
| `SPANCAT_LOSS` | 139119.57 | |
Sorour/llama3_cls_fomc | Sorour | 2024-05-19T02:20:08Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-19T02:14:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Bibek1129/nepali-poem-generator-distilgpt2 | Bibek1129 | 2024-05-19T02:09:20Z | 131 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"ne",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-19T02:04:44Z | ---
license: apache-2.0
language: ne # <-- my language
widget:
- text: "मेरो मन"
---
Nepali poem generator fine-tuned on distilgpt2-nepali. |
fzzhang/mistralv1_lora_r16_25e5_e05_merged | fzzhang | 2024-05-19T01:44:25Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-19T01:41:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TinyPixel/dnb-o | TinyPixel | 2024-05-19T01:35:38Z | 130 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-19T01:29:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ahmedgongi/Llama_dev3tokenizer_finale4 | ahmedgongi | 2024-05-19T01:35:09Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-19T01:35:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
apwic/sentiment-lora-r4a1d0.05-1 | apwic | 2024-05-19T01:32:10Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"id",
"base_model:indolem/indobert-base-uncased",
"base_model:finetune:indolem/indobert-base-uncased",
"license:mit",
"region:us"
] | null | 2024-05-19T00:59:00Z | ---
language:
- id
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: sentiment-lora-r4a1d0.05-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-lora-r4a1d0.05-1
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3356
- Accuracy: 0.8622
- Precision: 0.8399
- Recall: 0.8200
- F1: 0.8289
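For context, a minimal sketch of the kind of binary Indonesian sentiment inference this model targets is shown below. How the LoRA weights in this repository are packaged is not documented here, so the sketch loads the base model only and would need to be adapted once the adapter format is known; the example text and label mapping are assumptions.
```python
# Sketch of the target task: Indonesian binary sentiment classification.
# NOTE: this loads only the base model (with a randomly initialized head);
# the LoRA weights from this repo must be applied once their format is known.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

base_id = "indolem/indobert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)

inputs = tokenizer("pelayanannya sangat memuaskan", return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print("positive" if pred == 1 else "negative")   # label mapping is an assumption
```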
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 30
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.5657 | 1.0 | 122 | 0.5182 | 0.7243 | 0.6604 | 0.6424 | 0.6488 |
| 0.5109 | 2.0 | 244 | 0.5051 | 0.7243 | 0.6748 | 0.6874 | 0.6796 |
| 0.48 | 3.0 | 366 | 0.4643 | 0.7569 | 0.7047 | 0.6880 | 0.6948 |
| 0.434 | 4.0 | 488 | 0.4281 | 0.7920 | 0.7497 | 0.7378 | 0.7431 |
| 0.4106 | 5.0 | 610 | 0.4194 | 0.7920 | 0.7528 | 0.7778 | 0.7618 |
| 0.3812 | 6.0 | 732 | 0.3936 | 0.8296 | 0.8008 | 0.7744 | 0.7854 |
| 0.3689 | 7.0 | 854 | 0.3700 | 0.8521 | 0.8220 | 0.8204 | 0.8212 |
| 0.3489 | 8.0 | 976 | 0.3656 | 0.8346 | 0.8088 | 0.7780 | 0.7905 |
| 0.3502 | 9.0 | 1098 | 0.3640 | 0.8371 | 0.8101 | 0.7847 | 0.7955 |
| 0.3349 | 10.0 | 1220 | 0.3608 | 0.8346 | 0.8074 | 0.7805 | 0.7917 |
| 0.3189 | 11.0 | 1342 | 0.3574 | 0.8396 | 0.8128 | 0.7890 | 0.7992 |
| 0.3121 | 12.0 | 1464 | 0.3547 | 0.8471 | 0.8175 | 0.8093 | 0.8132 |
| 0.3181 | 13.0 | 1586 | 0.3478 | 0.8521 | 0.8332 | 0.7979 | 0.8122 |
| 0.3092 | 14.0 | 1708 | 0.3435 | 0.8596 | 0.8374 | 0.8157 | 0.8253 |
| 0.3018 | 15.0 | 1830 | 0.3466 | 0.8546 | 0.8296 | 0.8121 | 0.8200 |
| 0.2955 | 16.0 | 1952 | 0.3365 | 0.8596 | 0.8347 | 0.8207 | 0.8272 |
| 0.2917 | 17.0 | 2074 | 0.3353 | 0.8596 | 0.8374 | 0.8157 | 0.8253 |
| 0.2956 | 18.0 | 2196 | 0.3379 | 0.8596 | 0.8360 | 0.8182 | 0.8262 |
| 0.2899 | 19.0 | 2318 | 0.3353 | 0.8647 | 0.8455 | 0.8192 | 0.8306 |
| 0.2885 | 20.0 | 2440 | 0.3356 | 0.8622 | 0.8399 | 0.8200 | 0.8289 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
camilomj/MJInvincibleEraV3 | camilomj | 2024-05-19T01:29:26Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-19T01:28:01Z | ---
license: apache-2.0
---
|
RichardErkhov/malhajar_-_Mistral-7B-v0.2-meditron-turkish-8bits | RichardErkhov | 2024-05-19T01:28:26Z | 78 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-19T01:21:49Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Mistral-7B-v0.2-meditron-turkish - bnb 8bits
- Model creator: https://huggingface.co/malhajar/
- Original model: https://huggingface.co/malhajar/Mistral-7B-v0.2-meditron-turkish/
Original model description:
---
language:
- tr
- en
license: apache-2.0
datasets:
- malhajar/meditron-tr
model-index:
- name: Mistral-7B-v0.2-meditron-turkish
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 59.56
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=malhajar/Mistral-7B-v0.2-meditron-turkish
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 81.79
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=malhajar/Mistral-7B-v0.2-meditron-turkish
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.35
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=malhajar/Mistral-7B-v0.2-meditron-turkish
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 66.19
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=malhajar/Mistral-7B-v0.2-meditron-turkish
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 76.24
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=malhajar/Mistral-7B-v0.2-meditron-turkish
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 35.94
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=malhajar/Mistral-7B-v0.2-meditron-turkish
name: Open LLM Leaderboard
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Mistral-7B-v0.2-meditron-turkish is a Mistral model fine-tuned with the freeze technique (SFT training) on the Turkish Meditron dataset [`malhajar/meditron-tr`](https://huggingface.co/datasets/malhajar/meditron-tr).
This model can provide information on a variety of specific medical topics in Turkish and English.
### Model Description
- **Finetuned by:** [`Mohamad Alhajar`](https://www.linkedin.com/in/muhammet-alhajar/)
- **Language(s) (NLP):** Turkish,English
- **Finetuned from model:** [`mistralai/Mistral-7B-Instruct-v0.2`](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
### Prompt Template For Turkish Generation
```
### Kullancı:
```
### Prompt Template For English Generation
```
### User:
```
## How to Get Started with the Model
Use the code sample below to interact with the model.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "malhajar/Mistral-7B-v0.2-meditron-turkish"
model = AutoModelForCausalLM.from_pretrained(model_id,
                                             device_map="auto",
                                             torch_dtype=torch.float16,
                                             revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_id)

question = "Akciğer kanseri nedir?"
# Build the prompt in the expected template and generate a response
prompt = f'''
### Kullancı:
{question}
'''
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
output = model.generate(inputs=input_ids, max_new_tokens=512, pad_token_id=tokenizer.eos_token_id,
                        top_k=50, do_sample=True, top_p=0.95)
response = tokenizer.decode(output[0])
print(response)
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_malhajar__Mistral-7B-v0.2-meditron-turkish)
| Metric |Value|
|---------------------------------|----:|
|Avg. |63.34|
|AI2 Reasoning Challenge (25-Shot)|59.56|
|HellaSwag (10-Shot) |81.79|
|MMLU (5-Shot) |60.35|
|TruthfulQA (0-shot) |66.19|
|Winogrande (5-shot) |76.24|
|GSM8k (5-shot) |35.94|
|
TinyPixel/dnb-5 | TinyPixel | 2024-05-19T01:24:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-19T01:24:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
blockblockblock/karasu-7B-chat-plus-bpw4-exl2 | blockblockblock | 2024-05-19T01:23:19Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"ja",
"en",
"dataset:OpenAssistant/oasst1",
"dataset:zetavg/ShareGPT-Processed",
"dataset:augmxnt/ultra-orca-boros-en-ja-v1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"exl2",
"region:us"
] | text-generation | 2024-05-19T01:21:25Z | ---
license: apache-2.0
datasets:
- OpenAssistant/oasst1
- zetavg/ShareGPT-Processed
- augmxnt/ultra-orca-boros-en-ja-v1
language:
- ja
- en
---
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64c8a2e01c25d2c581a381c1/9CbN4lDGU42c-7DmK_mGM.png" alt="drawing" width="600"/>
</p>
# Evaluation

# How to use
### Huggingface
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
import torch
tokenizer = AutoTokenizer.from_pretrained("lightblue/karasu-7B-chat-plus")
model = AutoModelForCausalLM.from_pretrained("lightblue/karasu-7B-chat-plus", torch_dtype=torch.bfloat16, device_map="auto")
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
messages = [{"role": "system", "content": "あなたはAIアシスタントです。"}]
messages.append({"role": "user", "content": "イギリスの首相は誰ですか?"})
prompt = tokenizer.apply_chat_template(conversation=messages, add_generation_prompt=True, tokenize=False)
pipe(prompt, max_new_tokens=100, do_sample=False, temperature=0.0, return_full_text=False)
```
### VLLM
```python
from vllm import LLM, SamplingParams
sampling_params = SamplingParams(temperature=0.0, max_tokens=100)
llm = LLM(model="lightblue/karasu-7B-chat-plus")
messages = [{"role": "system", "content": "あなたはAIアシスタントです。"}]
messages.append({"role": "user", "content": "イギリスの首相は誰ですか?"})
prompt = llm.llm_engine.tokenizer.apply_chat_template(conversation=messages, add_generation_prompt=True, tokenize=False)
prompts = [prompt]
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
# Base checkpoint
[lightblue/karasu-7B](https://huggingface.co/lightblue/karasu-7B)
# Training datasets (total ~7B)
* Lightblue's suite of Kujira datasets (unreleased)
* Lightblue's own question-based datasets (unreleased)
* Lightblue's own category-based datasets (unreleased)
* [OASST](https://huggingface.co/datasets/OpenAssistant/oasst1) (Japanese chats only)
* [ShareGPT](https://huggingface.co/datasets/zetavg/ShareGPT-Processed) (Japanese chats only)
* [augmxnt/ultra-orca-boros-en-ja-v1](https://huggingface.co/datasets/augmxnt/ultra-orca-boros-en-ja-v1) (['airoboros', 'slimorca', 'ultrafeedback', 'airoboros_ja_new'] only)
# Developed by
<a href="https://www.lightblue-tech.com">
<img src="https://www.lightblue-tech.com/wp-content/uploads/2023/08/color_%E6%A8%AA%E5%9E%8B-1536x469.png" alt="Lightblue technology logo" width="400"/>
</a>
### Engineers
Peter Devine
Sho Higuchi
### Advisors
Yuuki Yamanaka
Atom Sonoda
### Project manager
Shunichi Taniguchi
### Dataset evaluator
Renju Aoki |
RichardErkhov/malhajar_-_Mistral-7B-v0.2-meditron-turkish-4bits | RichardErkhov | 2024-05-19T01:21:18Z | 78 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-19T01:17:15Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Mistral-7B-v0.2-meditron-turkish - bnb 4bits
- Model creator: https://huggingface.co/malhajar/
- Original model: https://huggingface.co/malhajar/Mistral-7B-v0.2-meditron-turkish/
Original model description:
---
language:
- tr
- en
license: apache-2.0
datasets:
- malhajar/meditron-tr
model-index:
- name: Mistral-7B-v0.2-meditron-turkish
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 59.56
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=malhajar/Mistral-7B-v0.2-meditron-turkish
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 81.79
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=malhajar/Mistral-7B-v0.2-meditron-turkish
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.35
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=malhajar/Mistral-7B-v0.2-meditron-turkish
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 66.19
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=malhajar/Mistral-7B-v0.2-meditron-turkish
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 76.24
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=malhajar/Mistral-7B-v0.2-meditron-turkish
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 35.94
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=malhajar/Mistral-7B-v0.2-meditron-turkish
name: Open LLM Leaderboard
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Mistral-7B-v0.2-meditron-turkish is a Mistral model fine-tuned with the freeze technique (SFT training) on the Turkish Meditron dataset [`malhajar/meditron-tr`](https://huggingface.co/datasets/malhajar/meditron-tr).
This model can provide information on a variety of specific medical topics in Turkish and English.
### Model Description
- **Finetuned by:** [`Mohamad Alhajar`](https://www.linkedin.com/in/muhammet-alhajar/)
- **Language(s) (NLP):** Turkish,English
- **Finetuned from model:** [`mistralai/Mistral-7B-Instruct-v0.2`](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
### Prompt Template For Turkish Generation
```
### Kullancı:
```
### Prompt Template For English Generation
```
### User:
```
## How to Get Started with the Model
Use the code sample below to interact with the model.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "malhajar/Mistral-7B-v0.2-meditron-turkish"
model = AutoModelForCausalLM.from_pretrained(model_id,
                                             device_map="auto",
                                             torch_dtype=torch.float16,
                                             revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_id)

question = "Akciğer kanseri nedir?"
# Build the prompt in the expected template and generate a response
prompt = f'''
### Kullancı:
{question}
'''
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
output = model.generate(inputs=input_ids, max_new_tokens=512, pad_token_id=tokenizer.eos_token_id,
                        top_k=50, do_sample=True, top_p=0.95)
response = tokenizer.decode(output[0])
print(response)
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_malhajar__Mistral-7B-v0.2-meditron-turkish)
| Metric |Value|
|---------------------------------|----:|
|Avg. |63.34|
|AI2 Reasoning Challenge (25-Shot)|59.56|
|HellaSwag (10-Shot) |81.79|
|MMLU (5-Shot) |60.35|
|TruthfulQA (0-shot) |66.19|
|Winogrande (5-shot) |76.24|
|GSM8k (5-shot) |35.94|
|
RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf | RichardErkhov | 2024-05-19T01:12:49Z | 14 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-18T23:07:58Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
NeuralMarcoro14-7B - GGUF
- Model creator: https://huggingface.co/mlabonne/
- Original model: https://huggingface.co/mlabonne/NeuralMarcoro14-7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [NeuralMarcoro14-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [NeuralMarcoro14-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [NeuralMarcoro14-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [NeuralMarcoro14-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [NeuralMarcoro14-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [NeuralMarcoro14-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.Q3_K.gguf) | Q3_K | 3.28GB |
| [NeuralMarcoro14-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [NeuralMarcoro14-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [NeuralMarcoro14-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [NeuralMarcoro14-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.Q4_0.gguf) | Q4_0 | 3.83GB |
| [NeuralMarcoro14-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [NeuralMarcoro14-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [NeuralMarcoro14-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.Q4_K.gguf) | Q4_K | 4.07GB |
| [NeuralMarcoro14-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [NeuralMarcoro14-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.Q4_1.gguf) | Q4_1 | 4.24GB |
| [NeuralMarcoro14-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.Q5_0.gguf) | Q5_0 | 4.65GB |
| [NeuralMarcoro14-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [NeuralMarcoro14-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.Q5_K.gguf) | Q5_K | 4.78GB |
| [NeuralMarcoro14-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [NeuralMarcoro14-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.Q5_1.gguf) | Q5_1 | 5.07GB |
| [NeuralMarcoro14-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.Q6_K.gguf) | Q6_K | 5.53GB |
| [NeuralMarcoro14-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.Q8_0.gguf) | Q8_0 | 7.17GB |
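Any of the files above can be downloaded and run locally with `llama-cpp-python`, for example (a sketch — the Q4_K_M file name comes from the table, while the context size and prompt are placeholders):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quantized file from this repo and load it
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf",
    filename="NeuralMarcoro14-7B.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096)  # context size is an arbitrary choice here
out = llm("What is a large language model?", max_tokens=128)
print(out["choices"][0]["text"])
```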
Original model description:
---
license: cc-by-nc-4.0
tags:
- mlabonne/Marcoro14-7B-slerp
- dpo
- rlhf
- merge
- mergekit
- lazymergekit
datasets:
- mlabonne/chatml_dpo_pairs
base_model: mlabonne/Marcoro14-7B-slerp
model-index:
- name: NeuralMarcoro14-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 71.42
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMarcoro14-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.59
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMarcoro14-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.84
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMarcoro14-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 65.64
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMarcoro14-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 81.22
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMarcoro14-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.74
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMarcoro14-7B
name: Open LLM Leaderboard
---

# NeuralMarcoro14-7B
This is a DPO fine-tuned version of [mlabonne/Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp) using the [chatml_dpo_pairs](https://huggingface.co/datasets/mlabonne/chatml_dpo_pairs) preference dataset.
It improves the model's performance on the Nous benchmark suite and the Open LLM Leaderboard.
It is currently the best-performing 7B LLM on the Open LLM Leaderboard (08/01/24).
You can try it out in this [Space](https://huggingface.co/spaces/mlabonne/NeuralMarcoro14-7B-GGUF-Chat) (GGUF Q4_K_M).
## ⚡ Quantized models
* **GGUF**: https://huggingface.co/mlabonne/NeuralMarcoro14-7B-GGUF
## 🏆 Evaluation
### Open LLM Leaderboard


### Nous
| Model |AGIEval|GPT4ALL|TruthfulQA|Bigbench|Average|
|-------------------------|------:|------:|---------:|-------:|------:|
|[NeuralMarcoro14-7B](https://huggingface.co/mlabonne/NeuralMarcoro14-7B)| 44.59| 76.17| 65.94| 46.9| 58.4|
|[Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp) | 44.66| 76.24| 64.15| 45.64| 57.67|
|Change | -0.07| -0.07| +1.79| +1.26| +0.73|
## 🧩 Training hyperparameters
**LoRA**:
* r=16
* lora_alpha=16
* lora_dropout=0.05
* bias="none"
* task_type="CAUSAL_LM"
* target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']
**Training arguments**:
* per_device_train_batch_size=4
* gradient_accumulation_steps=4
* gradient_checkpointing=True
* learning_rate=5e-5
* lr_scheduler_type="cosine"
* max_steps=200
* optim="paged_adamw_32bit"
* warmup_steps=100
**DPOTrainer**:
* beta=0.1
* max_prompt_length=1024
* max_length=1536
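For reference, a sketch of how these hyperparameters map onto `peft` and TRL's `DPOTrainer` (this is not the author's original training script; the TRL/peft versions, output path, and the dataset column layout are assumptions):

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "mlabonne/Marcoro14-7B-slerp"
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base)
# Assumes the dataset exposes "prompt"/"chosen"/"rejected" columns; reformat it first if it does not.
dataset = load_dataset("mlabonne/chatml_dpo_pairs", split="train")

peft_config = LoraConfig(
    r=16, lora_alpha=16, lora_dropout=0.05, bias="none", task_type="CAUSAL_LM",
    target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj'],
)

training_args = TrainingArguments(
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    max_steps=200,
    optim="paged_adamw_32bit",
    warmup_steps=100,
    output_dir="./neuralmarcoro14-7b-dpo",  # placeholder output path
)

trainer = DPOTrainer(
    model,
    ref_model=None,  # with a peft_config, TRL uses the frozen base weights as the implicit reference
    args=training_args,
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
    beta=0.1,
    max_prompt_length=1024,
    max_length=1536,
)
trainer.train()
```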
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mlabonne/NeuralMarcoro14-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
EpicJhon/l3-4 | EpicJhon | 2024-05-19T01:01:42Z | 16 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-03T09:10:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dexter-chan/distilbert-base-uncased-yelp | dexter-chan | 2024-05-19T00:50:08Z | 199 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-19T00:45:17Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8210
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.15.2
|
DownwardSpiral33/gpt2-imdb-pos-v2-roberta128-0_1 | DownwardSpiral33 | 2024-05-19T00:45:48Z | 132 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-19T00:45:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AndyNodi/llama-3-8b-Instruct-bnb-4bit-aiaustin-demo | AndyNodi | 2024-05-19T00:42:24Z | 4 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-19T00:38:25Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** AndyNodi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
stablediffusionapi/divineelegancemix-v10 | stablediffusionapi | 2024-05-19T00:41:23Z | 29 | 0 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-05-19T00:39:13Z | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# DivineEleganceMix v10 API Inference

## Get API Key
Get API key from [ModelsLab API](http://modelslab.com), No Payment needed.
Replace Key in below code, change **model_id** to "divineelegancemix-v10"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://docs.modelslab.com)
Try model for free: [Generate Images](https://modelslab.com/models/divineelegancemix-v10)
Model link: [View model](https://modelslab.com/models/divineelegancemix-v10)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json
url = "https://modelslab.com/api/v6/images/text2img"
payload = json.dumps({
"key": "your_api_key",
"model_id": "divineelegancemix-v10",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
jaandoui/DNABERT2-AttentionExtracted | jaandoui | 2024-05-19T00:29:20Z | 1,530 | 3 | transformers | [
"transformers",
"pytorch",
"biology",
"medical",
"custom_code",
"arxiv:2306.15006",
"endpoints_compatible",
"region:us"
] | null | 2024-05-14T09:22:40Z | ---
metrics:
- matthews_correlation
- f1
tags:
- biology
- medical
---
This version of DNABERT2 has been modified to also output the attention weights, for attention analysis.
**To the author of DNABERT2: feel free to use these modifications.**
Use ```--model_name_or_path jaandoui/DNABERT2-AttentionExtracted``` instead of the original repository to have access to the attention.
Most of the modifications were done in Bert_Layer.py.
It has been modified especially for fine-tuning and hasn't been tried for pretraining.
Before or next to each modification you can find ```"JAANDOUI"```, so to see all modifications, search for ```"JAANDOUI"```.
```"JAANDOUI TODO"``` means that if that part is going to be used, something might still be missing there.
Now in ```Trainer``` (or ```CustomTrainer``` if you override it), when calling the model inside ```compute_loss(..)```:
```outputs = model(**inputs, return_dict=True, output_attentions=True)```
activate the extraction of attention with ```output_attentions=True``` (```return_dict=True``` is optional).
You can now extract the attention from ```outputs.attentions```.
Note that the attentions output carries a layer dimension (usually 12 here); ```outputs.attentions[-1]``` refers to the attention of the last layer.
Read more about model outputs here: https://huggingface.co/docs/transformers/v4.40.2/en/main_classes/output#transformers.utils.ModelOutput
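For instance, a minimal sketch of such an override (a hypothetical `CustomTrainer`; it assumes labels are included in `inputs` so the model computes its own loss):

```python
from transformers import Trainer

class CustomTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        # Ask the modified DNABERT-2 layers to also return attention weights
        outputs = model(**inputs, return_dict=True, output_attentions=True)
        # outputs.attentions[-1] is the last layer's attention
        # (roughly batch x heads x seq_len x seq_len); detach before keeping it around
        self.last_attention = outputs.attentions[-1].detach().cpu()
        loss = outputs.loss
        return (loss, outputs) if return_outputs else loss
```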
I'm also not using Triton, therefore cannot guarantee that it will work with it.
I also read that there were some problems with extracting attention when using Flash Attention here: https://github.com/huggingface/transformers/issues/28903
Not sure if that is relevant for us, since it's about Mistral models.
I'm still exploring this attention, please don't take it as if it works 100%. I'll update the repository when I'm sure.
The official DNABERT-2 paper: [DNABERT-2: Efficient Foundation Model and Benchmark For Multi-Species Genome](https://arxiv.org/pdf/2306.15006.pdf).
READ ME OF THE OFFICIAL DNABERT2:
We sincerely appreciate the MosaicML team for the [MosaicBERT](https://openreview.net/forum?id=5zipcfLC2Z) implementation, which serves as the base of DNABERT-2 development.
DNABERT-2 is a transformer-based genome foundation model trained on multi-species genome.
To load the model from huggingface:
```
import torch
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("zhihan1996/DNABERT-2-117M", trust_remote_code=True)
model = AutoModel.from_pretrained("zhihan1996/DNABERT-2-117M", trust_remote_code=True)
```
To calculate the embedding of a dna sequence
```
dna = "ACGTAGCATCGGATCTATCTATCGACACTTGGTTATCGATCTACGAGCATCTCGTTAGC"
inputs = tokenizer(dna, return_tensors = 'pt')["input_ids"]
hidden_states = model(inputs)[0] # [1, sequence_length, 768]
# embedding with mean pooling
embedding_mean = torch.mean(hidden_states[0], dim=0)
print(embedding_mean.shape) # expect to be 768
# embedding with max pooling
embedding_max = torch.max(hidden_states[0], dim=0)[0]
print(embedding_max.shape) # expect to be 768
``` |
apwic/sentiment-lora-r4a0d0.1-1 | apwic | 2024-05-19T00:25:15Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"id",
"base_model:indolem/indobert-base-uncased",
"base_model:finetune:indolem/indobert-base-uncased",
"license:mit",
"region:us"
] | null | 2024-05-18T23:52:00Z | ---
language:
- id
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: sentiment-lora-r4a0d0.1-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-lora-r4a0d0.1-1
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3239
- Accuracy: 0.8622
- Precision: 0.8373
- Recall: 0.8250
- F1: 0.8307
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 30
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.5658 | 1.0 | 122 | 0.5195 | 0.7268 | 0.6646 | 0.6492 | 0.6550 |
| 0.5125 | 2.0 | 244 | 0.5060 | 0.7293 | 0.6805 | 0.6935 | 0.6855 |
| 0.4809 | 3.0 | 366 | 0.4686 | 0.7669 | 0.7184 | 0.7151 | 0.7167 |
| 0.4353 | 4.0 | 488 | 0.4295 | 0.7920 | 0.7500 | 0.7353 | 0.7417 |
| 0.4116 | 5.0 | 610 | 0.4171 | 0.8020 | 0.7628 | 0.7849 | 0.7714 |
| 0.3809 | 6.0 | 732 | 0.3865 | 0.8446 | 0.8148 | 0.8051 | 0.8096 |
| 0.3681 | 7.0 | 854 | 0.3697 | 0.8496 | 0.8193 | 0.8161 | 0.8177 |
| 0.3469 | 8.0 | 976 | 0.3554 | 0.8471 | 0.8206 | 0.8018 | 0.8102 |
| 0.3455 | 9.0 | 1098 | 0.3494 | 0.8496 | 0.8211 | 0.8111 | 0.8158 |
| 0.3284 | 10.0 | 1220 | 0.3437 | 0.8496 | 0.8289 | 0.7961 | 0.8096 |
| 0.3132 | 11.0 | 1342 | 0.3371 | 0.8596 | 0.8389 | 0.8132 | 0.8243 |
| 0.3042 | 12.0 | 1464 | 0.3371 | 0.8546 | 0.8254 | 0.8221 | 0.8238 |
| 0.3063 | 13.0 | 1586 | 0.3317 | 0.8596 | 0.8406 | 0.8107 | 0.8233 |
| 0.3013 | 14.0 | 1708 | 0.3304 | 0.8622 | 0.8373 | 0.8250 | 0.8307 |
| 0.2928 | 15.0 | 1830 | 0.3295 | 0.8596 | 0.8325 | 0.8257 | 0.8290 |
| 0.2864 | 16.0 | 1952 | 0.3284 | 0.8622 | 0.8351 | 0.8300 | 0.8325 |
| 0.2819 | 17.0 | 2074 | 0.3254 | 0.8596 | 0.8347 | 0.8207 | 0.8272 |
| 0.2877 | 18.0 | 2196 | 0.3249 | 0.8596 | 0.8336 | 0.8232 | 0.8281 |
| 0.2819 | 19.0 | 2318 | 0.3241 | 0.8647 | 0.8410 | 0.8267 | 0.8333 |
| 0.2803 | 20.0 | 2440 | 0.3239 | 0.8622 | 0.8373 | 0.8250 | 0.8307 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
stablediffusionapi/icbinp | stablediffusionapi | 2024-05-19T00:24:38Z | 29 | 0 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-05-19T00:22:41Z | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# ICBINP API Inference

## Get API Key
Get API key from [ModelsLab API](http://modelslab.com), No Payment needed.
Replace Key in below code, change **model_id** to "icbinp"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://docs.modelslab.com)
Try model for free: [Generate Images](https://modelslab.com/models/icbinp)
Model link: [View model](https://modelslab.com/models/icbinp)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json
url = "https://modelslab.com/api/v6/images/text2img"
payload = json.dumps({
"key": "your_api_key",
"model_id": "icbinp",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
VinyVan/modelGGUF | VinyVan | 2024-05-19T00:22:29Z | 8 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-19T00:19:50Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** VinyVan
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
stablediffusionapi/cuteyukimix-kemiaomiao | stablediffusionapi | 2024-05-19T00:19:49Z | 30 | 1 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-05-19T00:17:42Z | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# CuteYukiMix kemiaomiao API Inference

## Get API Key
Get API key from [ModelsLab API](http://modelslab.com), No Payment needed.
Replace Key in below code, change **model_id** to "cuteyukimix-kemiaomiao"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://docs.modelslab.com)
Try model for free: [Generate Images](https://modelslab.com/models/cuteyukimix-kemiaomiao)
Model link: [View model](https://modelslab.com/models/cuteyukimix-kemiaomiao)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json
url = "https://modelslab.com/api/v6/images/text2img"
payload = json.dumps({
"key": "your_api_key",
"model_id": "cuteyukimix-kemiaomiao",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
Zhengping/roberta-large-unli | Zhengping | 2024-05-19T00:18:25Z | 331 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"en",
"dataset:Zhengping/UNLI",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-04-29T00:28:49Z | ---
datasets:
- Zhengping/UNLI
language:
- en
pipeline_tag: text-classification
---
UNLI model fine-tuned from `ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli` on the UNLI dataset; a minimal usage sketch is included after the citation. If you find this model useful, please cite the paper:
```
@inproceedings{chen-etal-2020-uncertain,
title = "Uncertain Natural Language Inference",
author = "Chen, Tongfei and
Jiang, Zhengping and
Poliak, Adam and
Sakaguchi, Keisuke and
Van Durme, Benjamin",
editor = "Jurafsky, Dan and
Chai, Joyce and
Schluter, Natalie and
Tetreault, Joel",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.acl-main.774",
doi = "10.18653/v1/2020.acl-main.774",
pages = "8772--8779",
abstract = "We introduce Uncertain Natural Language Inference (UNLI), a refinement of Natural Language Inference (NLI) that shifts away from categorical labels, targeting instead the direct prediction of subjective probability assessments. We demonstrate the feasibility of collecting annotations for UNLI by relabeling a portion of the SNLI dataset under a probabilistic scale, where items even with the same categorical label differ in how likely people judge them to be true given a premise. We describe a direct scalar regression modeling approach, and find that existing categorically-labeled NLI data can be used in pre-training. Our best models correlate well with humans, demonstrating models are capable of more subtle inferences than the categorical bin assignment employed in current NLI tasks.",
}
``` |
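A minimal usage sketch (an assumption on my part: the model uses a single-output regression head whose scalar is the subjective probability of the hypothesis given the premise — verify against the paper's code before relying on it):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Zhengping/roberta-large-unli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

premise = "A man is playing a guitar on stage."
hypothesis = "The man is a professional musician."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # scalar output of the regression head
# If scores fall outside [0, 1], apply a sigmoid instead of reading the raw value.
print(score)
```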
abc88767/8sc100 | abc88767 | 2024-05-19T00:03:04Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-18T05:04:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
uisikdag/finetunedsam | uisikdag | 2024-05-18T23:55:30Z | 135 | 0 | transformers | [
"transformers",
"safetensors",
"sam",
"mask-generation",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | mask-generation | 2024-05-18T23:52:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
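The original card leaves this section as a placeholder. Based only on the repo's `sam`/`mask-generation` tags, a minimal sketch with the `transformers` SAM classes might look like this (it assumes the repository ships a processor config; if not, load the processor from the base SAM checkpoint the model was fine-tuned from):

```python
import torch
from PIL import Image
from transformers import SamModel, SamProcessor

processor = SamProcessor.from_pretrained("uisikdag/finetunedsam")
model = SamModel.from_pretrained("uisikdag/finetunedsam")

image = Image.open("example.jpg").convert("RGB")
input_points = [[[450, 600]]]  # one (x, y) prompt point; pick one for your image

inputs = processor(image, input_points=input_points, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Upscale the predicted low-resolution masks back to the original image size.
masks = processor.image_processor.post_process_masks(
    outputs.pred_masks.cpu(),
    inputs["original_sizes"].cpu(),
    inputs["reshaped_input_sizes"].cpu(),
)
```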
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
irfanfadhullah/winagent-8b-Instruct-bnb-16bit | irfanfadhullah | 2024-05-18T23:54:43Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-18T23:31:07Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** irfanfadhullah
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ViraIntelligentDataMining/AriaBERT | ViraIntelligentDataMining | 2024-05-18T23:52:38Z | 162 | 5 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"bert",
"persian",
"fa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-12-27T18:33:39Z | ---
license: apache-2.0
language:
- fa
tags:
- bert
- roberta
- persian
---
# AriaBERT: A Pre-trained Persian BERT Model for Natural Language Understanding
## Introduction
AriaBERT represents a breakthrough in natural language processing (NLP) for the Persian language. Developed to address the critical gap in efficient pretrained language models for Persian, AriaBERT is tailored to elevate the standards of Persian language tasks.
## Paper: https://www.researchsquare.com/article/rs-3558473/v1
## Key Features
- **Diverse Training Data:** AriaBERT has been trained on over 32 gigabytes of varied Persian textual data, spanning conversational, formal, and hybrid texts. This includes a rich mix of tweets, news articles, poems, medical and encyclopedia texts, user opinions, and more.
- **RoBERTa Architecture:** Leveraging the robustness of the RoBERTa architecture and the precision of a Byte-Pair Encoding tokenizer, AriaBERT stands apart from traditional BERT-based models.
- **Broad Applicability:** Ideal for a range of NLP tasks including classification, sentiment analysis, and stance detection, particularly in the Persian language context.
## Performance Benchmarks
- **Sentiment Analysis:** Achieves an average improvement of 3% over competing models.
- **Classification Tasks:** Demonstrates a 0.65% improvement in accuracy.
- **Stance Detection:** Shows a 3% enhancement in performance metrics.
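## Usage Example
The card does not include loading code, so the snippet below is an illustrative sketch rather than official documentation: it assumes the checkpoint works with the standard `transformers` fill-mask pipeline (consistent with the repo's `roberta`/`fill-mask` tags) and that the mask token is the RoBERTa-style `<mask>`.
```python
from transformers import pipeline

# Load AriaBERT as a fill-mask pipeline (assumes a RoBERTa-compatible config).
fill_mask = pipeline("fill-mask", model="ViraIntelligentDataMining/AriaBERT")

# "Tehran is the capital of <mask>." — <mask> is assumed to be the mask token.
print(fill_mask("تهران پایتخت <mask> است."))
```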
|
flammenai/Mahou-1.2a-mistral-7B | flammenai | 2024-05-18T23:52:31Z | 43 | 6 | transformers | [
"transformers",
"safetensors",
"gguf",
"mistral",
"text-generation",
"dataset:flammenai/FlameMix-DPO-v1",
"dataset:flammenai/Grill-preprod-v1_chatML",
"dataset:flammenai/Grill-preprod-v2_chatML",
"base_model:flammenai/flammen27-mistral-7B",
"base_model:quantized:flammenai/flammen27-mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-18T15:55:55Z | ---
library_name: transformers
license: apache-2.0
base_model:
- flammenai/flammen27-mistral-7B
datasets:
- flammenai/FlameMix-DPO-v1
- flammenai/Grill-preprod-v1_chatML
- flammenai/Grill-preprod-v2_chatML
---

# Mahou-1.2a-mistral-7B
Mahou is our attempt to build a production-ready conversational/roleplay LLM.
Future versions will be released iteratively and finetuned from flammen.ai conversational data.
1.2a is rebased and retrained to improve comprehension and coherence.
### Chat Format
This model has been trained to use ChatML format.
```
<|im_start|>system
{{system}}<|im_end|>
<|im_start|>{{char}}
{{message}}<|im_end|>
<|im_start|>{{user}}
{{message}}<|im_end|>
```
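The sketch below shows one way to produce this format programmatically with `transformers`. It is illustrative only: it assumes the repo's tokenizer ships a ChatML `chat_template`, and it uses generic `system`/`user` roles, whereas the template above substitutes character names for the role tags.
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("flammenai/Mahou-1.2a-mistral-7B")

messages = [
    {"role": "system", "content": "You are Mahou, a friendly roleplay partner."},
    {"role": "user", "content": "*waves* how was magician academy today?"},
]

# Renders the messages with the tokenizer's chat template (assumed to be ChatML).
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```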
# Roleplay Format
- Speech without quotes.
- Actions in `*asterisks*`
```
*leans against wall coolly* so like, i just casted a super strong spell at magician academy today, not gonna lie, felt badass.
```
### ST Settings
1. Use ChatML for the Context Template.
2. Turn on Instruct Mode for ChatML.
3. Use the following stopping strings: `["<", "|", "<|", "\n"]`
### Method
Finetuned using an A100 on Google Colab.
[Fine-tune a Mistral-7b model with Direct Preference Optimization](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac) - [Maxime Labonne](https://huggingface.co/mlabonne)
### Configuration
LoRA, model, and training settings:
```python
# Imports added for completeness; model_name, new_model, dataset, and tokenizer
# are assumed to be defined earlier in the original notebook.
import torch
from peft import LoraConfig
from transformers import AutoModelForCausalLM, TrainingArguments
from trl import DPOTrainer

# LoRA configuration
peft_config = LoraConfig(
r=16,
lora_alpha=16,
lora_dropout=0.05,
bias="none",
task_type="CAUSAL_LM",
target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']
)
# Model to fine-tune
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
load_in_4bit=True
)
model.config.use_cache = False
# Reference model
ref_model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
load_in_4bit=True
)
# Training arguments
training_args = TrainingArguments(
per_device_train_batch_size=4,
gradient_accumulation_steps=4,
gradient_checkpointing=True,
learning_rate=5e-5,
lr_scheduler_type="cosine",
max_steps=2000,
save_strategy="no",
logging_steps=1,
output_dir=new_model,
optim="paged_adamw_32bit",
warmup_steps=100,
bf16=True,
report_to="wandb",
)
# Create DPO trainer
dpo_trainer = DPOTrainer(
model,
ref_model,
args=training_args,
train_dataset=dataset,
tokenizer=tokenizer,
peft_config=peft_config,
beta=0.1,
force_use_ref_model=True
)
# Fine-tune model with DPO
dpo_trainer.train()
``` |
mrm8488/llama-3-8b-ft-en-es-rag-4bit-merged | mrm8488 | 2024-05-18T23:52:29Z | 79 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-18T23:48:36Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** mrm8488
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
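The card does not include usage code. The following is a rough sketch only: the repo tags indicate a merged 4-bit bitsandbytes checkpoint, and the prompt format below is a guess rather than anything documented by the author.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mrm8488/llama-3-8b-ft-en-es-rag-4bit-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The checkpoint is tagged as 4-bit/bitsandbytes, so it should load quantized as-is.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Question: What is retrieval-augmented generation?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```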
|