modelId (string, length 5 to 139) | author (string, length 2 to 42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-27 18:27:39) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 500 classes) | tags (sequence of strings, length 1 to 4.05k) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-27 18:23:41) | card (string, length 11 to 1.01M)
---|---|---|---|---|---|---|---|---|---|
jeonsiyun/layoutlmv3-financial-document-classification4 | jeonsiyun | 2024-03-06T04:44:50Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"layoutlmv3",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-06T04:44:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
seyf1elislam/WestKunai-Hermes-long-128k-test-7b-GGUF | seyf1elislam | 2024-03-06T04:33:26Z | 0 | 0 | null | [
"gguf",
"GGUF",
"base_model:seyf1elislam/WestKunai-Hermes-long-128k-test-7b",
"base_model:quantized:seyf1elislam/WestKunai-Hermes-long-128k-test-7b",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T02:14:44Z | ---
tags:
- GGUF
base_model:
- seyf1elislam/WestKunai-Hermes-long-128k-test-7b
---
# WestKunai-Hermes-long-128k-test-7b
- Model creator: [seyf1elislam](https://huggingface.co/seyf1elislam)
- Original model: [WestKunai-Hermes-long-128k-test-7b](https://huggingface.co/seyf1elislam/WestKunai-Hermes-long-128k-test-7b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [seyf1elislam's WestKunai-Hermes-long-128k-test-7b](https://huggingface.co/seyf1elislam/WestKunai-Hermes-long-128k-test-7b).
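As a hedged aside (not part of the original card), a quant such as one of those listed under "Provided files" below can typically be loaded with the llama-cpp-python bindings; the local file name, context size, and prompt here are illustrative assumptions.
```python
# Minimal sketch, assuming llama-cpp-python is installed and the Q4_K_M file
# listed below has been downloaded locally; names and settings are illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="westkunai-hermes-long-128k-test-7b.Q4_K_M.gguf",  # one of the quants below
    n_ctx=4096,        # context window; raise it if you have the RAM for long-context use
    n_gpu_layers=0,    # set > 0 (or -1) to offload layers to a supported GPU
)

out = llm("Q: What is a GGUF file?\nA:", max_tokens=64, temperature=0.7)
print(out["choices"][0]["text"])
```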
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [westkunai-hermes-long-128k-test-7b.Q2_K.gguf ](https://huggingface.co/seyf1elislam/WestKunai-Hermes-long-128k-test-7b-GGUF/blob/main/westkunai-hermes-long-128k-test-7b.Q2_K.gguf ) | Q2_K | 2 | 2.72 GB| 5.22 GB | significant quality loss - not recommended for most purposes |
| [westkunai-hermes-long-128k-test-7b.Q3_K_M.gguf ](https://huggingface.co/seyf1elislam/WestKunai-Hermes-long-128k-test-7b-GGUF/blob/main/westkunai-hermes-long-128k-test-7b.Q3_K_M.gguf ) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [WestKunai-Hermes-long-128k-test-7b.Q4_K_S.gguf ](https://huggingface.co/seyf1elislam/WestKunai-Hermes-long-128k-test-7b-GGUF/blob/main/WestKunai-Hermes-long-128k-test-7b.Q4_K_S.gguf ) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [westkunai-hermes-long-128k-test-7b.Q4_K_M.gguf ](https://huggingface.co/seyf1elislam/WestKunai-Hermes-long-128k-test-7b-GGUF/blob/main/westkunai-hermes-long-128k-test-7b.Q4_K_M.gguf ) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [westkunai-hermes-long-128k-test-7b.Q5_K_M.gguf ](https://huggingface.co/seyf1elislam/WestKunai-Hermes-long-128k-test-7b-GGUF/blob/main/westkunai-hermes-long-128k-test-7b.Q5_K_M.gguf ) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [westkunai-hermes-long-128k-test-7b.Q6_K.gguf ](https://huggingface.co/seyf1elislam/WestKunai-Hermes-long-128k-test-7b-GGUF/blob/main/westkunai-hermes-long-128k-test-7b.Q6_K.gguf ) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [westkunai-hermes-long-128k-test-7b.Q8_0.gguf ](https://huggingface.co/seyf1elislam/WestKunai-Hermes-long-128k-test-7b-GGUF/blob/main/westkunai-hermes-long-128k-test-7b.Q8_0.gguf ) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended | |
alinerodrigues/wav2vec2-xlsr-1b-mecita-portuguese-all-clean-09 | alinerodrigues | 2024-03-06T04:32:39Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-03-06T00:17:43Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-xlsr-1b-mecita-portuguese-all-clean-09
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-1b-mecita-portuguese-all-clean-09
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-xls-r-1b-portuguese](https://huggingface.co/jonatasgrosman/wav2vec2-xls-r-1b-portuguese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1091
- Wer: 0.0715
- Cer: 0.0201
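As a hedged illustration (not part of the generated card), inference with this checkpoint would typically go through the automatic-speech-recognition pipeline; the audio file name below is a placeholder.
```python
# Minimal sketch, assuming a local 16 kHz audio file; the file name is a placeholder.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="alinerodrigues/wav2vec2-xlsr-1b-mecita-portuguese-all-clean-09",
)
print(asr("exemplo.wav")["text"])
```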
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
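The list above maps roughly onto `transformers` `TrainingArguments`; the sketch below is an illustrative assumption, not the author's training script, and omits the dataset, model, and CTC-specific wiring.
```python
# Hedged sketch of the hyperparameters above expressed as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-xlsr-1b-mecita-portuguese-all-clean-09",
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size of 32
    lr_scheduler_type="linear",
    num_train_epochs=100,
    fp16=True,                      # "Native AMP" mixed precision
)
```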
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 28.1893 | 1.0 | 67 | 5.5870 | 0.9891 | 0.9135 |
| 8.4096 | 2.0 | 134 | 3.3389 | 0.9888 | 0.9614 |
| 4.6285 | 3.0 | 201 | 3.5010 | 0.9713 | 0.9829 |
| 4.6285 | 4.0 | 268 | 3.1012 | 0.9746 | 0.9824 |
| 4.1632 | 5.0 | 335 | 3.0531 | 0.9766 | 0.9801 |
| 3.6744 | 6.0 | 402 | 3.0343 | 0.9868 | 0.9739 |
| 3.6744 | 7.0 | 469 | 2.8810 | 1.0 | 1.0 |
| 3.0111 | 8.0 | 536 | 2.4821 | 0.9970 | 0.9612 |
| 2.0541 | 9.0 | 603 | 0.4203 | 0.5659 | 0.1206 |
| 2.0541 | 10.0 | 670 | 0.1569 | 0.1107 | 0.0288 |
| 0.4608 | 11.0 | 737 | 0.1331 | 0.0975 | 0.0263 |
| 0.2892 | 12.0 | 804 | 0.1344 | 0.0955 | 0.0254 |
| 0.2892 | 13.0 | 871 | 0.1242 | 0.0797 | 0.0226 |
| 0.2182 | 14.0 | 938 | 0.1217 | 0.0837 | 0.0240 |
| 0.2017 | 15.0 | 1005 | 0.1147 | 0.0728 | 0.0208 |
| 0.2017 | 16.0 | 1072 | 0.1206 | 0.0725 | 0.0216 |
| 0.1666 | 17.0 | 1139 | 0.1155 | 0.0744 | 0.0215 |
| 0.169 | 18.0 | 1206 | 0.1175 | 0.0744 | 0.0213 |
| 0.169 | 19.0 | 1273 | 0.1187 | 0.0787 | 0.0218 |
| 0.1678 | 20.0 | 1340 | 0.1211 | 0.0744 | 0.0216 |
| 0.148 | 21.0 | 1407 | 0.1153 | 0.0715 | 0.0205 |
| 0.148 | 22.0 | 1474 | 0.1164 | 0.0728 | 0.0213 |
| 0.1487 | 23.0 | 1541 | 0.1091 | 0.0715 | 0.0201 |
| 0.138 | 24.0 | 1608 | 0.1204 | 0.0705 | 0.0202 |
| 0.138 | 25.0 | 1675 | 0.1114 | 0.0698 | 0.0201 |
| 0.1251 | 26.0 | 1742 | 0.1180 | 0.0688 | 0.0202 |
| 0.1056 | 27.0 | 1809 | 0.1188 | 0.0675 | 0.0199 |
| 0.1056 | 28.0 | 1876 | 0.1123 | 0.0652 | 0.0188 |
| 0.1107 | 29.0 | 1943 | 0.1226 | 0.0728 | 0.0215 |
| 0.0972 | 30.0 | 2010 | 0.1221 | 0.0705 | 0.0205 |
| 0.0972 | 31.0 | 2077 | 0.1226 | 0.0702 | 0.0207 |
| 0.1032 | 32.0 | 2144 | 0.1159 | 0.0669 | 0.0195 |
| 0.1038 | 33.0 | 2211 | 0.1205 | 0.0711 | 0.0204 |
| 0.1038 | 34.0 | 2278 | 0.1191 | 0.0685 | 0.0192 |
| 0.1027 | 35.0 | 2345 | 0.1170 | 0.0688 | 0.0198 |
| 0.1 | 36.0 | 2412 | 0.1189 | 0.0659 | 0.0198 |
| 0.1 | 37.0 | 2479 | 0.1102 | 0.0649 | 0.0187 |
| 0.0989 | 38.0 | 2546 | 0.1150 | 0.0718 | 0.0206 |
| 0.089 | 39.0 | 2613 | 0.1202 | 0.0682 | 0.0195 |
| 0.089 | 40.0 | 2680 | 0.1168 | 0.0669 | 0.0194 |
| 0.0807 | 41.0 | 2747 | 0.1161 | 0.0669 | 0.0190 |
| 0.0812 | 42.0 | 2814 | 0.1208 | 0.0715 | 0.0204 |
| 0.0812 | 43.0 | 2881 | 0.1260 | 0.0629 | 0.0193 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.2.1+cu121
- Datasets 2.17.0
- Tokenizers 0.13.3
|
bhaswata08/Llama-2-7b-chat-hf-function-calling-v3-AWQ | bhaswata08 | 2024-03-06T04:26:51Z | 48 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] | text-generation | 2024-03-05T09:45:58Z | ---
license: llama2
---
# Model Card for bhaswata08/Llama-2-7b-chat-hf-function-calling-v3-AWQ
Model creator: Trelis
Original model: Llama-2-7b-chat-hf-function-calling-v3
This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rachittshah/gemma-2b-Gujpaca | rachittshah | 2024-03-06T04:24:53Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T04:21:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DFJordan/binary-image-classifier | DFJordan | 2024-03-06T04:23:02Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-03-06T04:03:43Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: binary-image-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# binary-image-classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1222
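As a hedged example (not part of the generated card), the fine-tuned ViT can be queried through the image-classification pipeline; the image path below is a placeholder.
```python
# Minimal sketch; the image path is a placeholder.
from transformers import pipeline

classifier = pipeline("image-classification", model="DFJordan/binary-image-classifier")
print(classifier("example.jpg"))  # list of {"label": ..., "score": ...} dicts
```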
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1302 | 1.0 | 67 | 0.1486 |
| 0.0503 | 2.0 | 134 | 0.1087 |
| 0.0188 | 3.0 | 201 | 0.1511 |
| 0.0116 | 4.0 | 268 | 0.1225 |
| 0.0088 | 5.0 | 335 | 0.1222 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
lcfrs/gemma-2b-it | lcfrs | 2024-03-06T04:18:41Z | 0 | 0 | null | [
"gguf",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T03:55:44Z | ---
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
---
Created by running `quantize` from `llama.cpp` on [gemma-2b-it.gguf](https://huggingface.co/google/gemma-2b-it-GGUF/blob/main/).
```sh
llama.cpp $ ./quantize models/gemma-2b-it.gguf models/gemma-2b-it-Q4_K_M.gguf Q4_K_M
main: build = 2351 (652ca2bd)
main: built with Android (10552028, +pgo, +bolt, +lto, -mlgo, based on r487747d) clang version 17.0.2 (https://android.googlesource.com/toolchain/llvm-project d9f89f4d16663d5012e5c09495f3b30ece3d2362) for x86_64-apple-darwin22.5.0
main: quantizing 'models/gemma-2b-it.gguf' to 'models/gemma-2b-it-Q4_K_M.gguf' as Q4_K_M
llama_model_loader: loaded meta data with 19 key-value pairs and 164 tensors from models/gemma-2b-it.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = gemma
llama_model_loader: - kv 1: general.name str = gemma-2b-it
llama_model_loader: - kv 2: gemma.context_length u32 = 8192
llama_model_loader: - kv 3: gemma.block_count u32 = 18
llama_model_loader: - kv 4: gemma.embedding_length u32 = 2048
llama_model_loader: - kv 5: gemma.feed_forward_length u32 = 16384
llama_model_loader: - kv 6: gemma.attention.head_count u32 = 8
llama_model_loader: - kv 7: gemma.attention.head_count_kv u32 = 1
llama_model_loader: - kv 8: gemma.attention.key_length u32 = 256
llama_model_loader: - kv 9: gemma.attention.value_length u32 = 256
llama_model_loader: - kv 10: gemma.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 11: tokenizer.ggml.model str = llama
llama_model_loader: - kv 12: tokenizer.ggml.bos_token_id u32 = 2
llama_model_loader: - kv 13: tokenizer.ggml.eos_token_id u32 = 1
llama_model_loader: - kv 14: tokenizer.ggml.padding_token_id u32 = 0
llama_model_loader: - kv 15: tokenizer.ggml.unknown_token_id u32 = 3
llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,256128] = ["<pad>", "<eos>", "<bos>", "<unk>", ...
llama_model_loader: - kv 17: tokenizer.ggml.scores arr[f32,256128] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 18: tokenizer.ggml.token_type arr[i32,256128] = [3, 3, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - type f32: 164 tensors
llama_model_quantize_internal: meta size = 6042528 bytes
[..snip..]
llama_model_quantize_internal: model size = 9561.29 MB
llama_model_quantize_internal: quant size = 1549.19 MB
main: quantize time = 27285.22 ms
main: total time = 27285.22 ms
``` |
uname-n/tiny-aquatic-llama.10k | uname-n | 2024-03-06T04:16:13Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"conversational",
"dataset:Open-Orca/SlimOrca-Dedup",
"dataset:uname-n/slim-orca-dedup-chat-10k",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-05T02:30:10Z | ---
license: apache-2.0
datasets:
- Open-Orca/SlimOrca-Dedup
- uname-n/slim-orca-dedup-chat-10k
widget:
- text: "<|system|>\nYou are a chatbot who can help code!</s>\n<|user|>\nWrite me a function to calculate the first 10 digits of the fibonacci sequence in Python and print it out to the CLI.</s>\n<|assistant|>\n"
---
<div align="center">
# Tiny Aquatic Llama
</div>
#### This Model
This is a chat model finetuned on top of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T). The model was fine-tuned on a 10k sample from [Open-Orca/SlimOrca-Dedup](https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup).
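As a hedged sketch (not part of the original card), generation with the chat markup shown in the widget above might look like the following; the sampling settings are assumptions.
```python
# Minimal sketch using the widget's chat format; settings are illustrative.
from transformers import pipeline

pipe = pipeline("text-generation", model="uname-n/tiny-aquatic-llama.10k")
prompt = (
    "<|system|>\nYou are a chatbot who can help code!</s>\n"
    "<|user|>\nWrite me a function to calculate the first 10 digits of the "
    "fibonacci sequence in Python and print it out to the CLI.</s>\n"
    "<|assistant|>\n"
)
print(pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7)[0]["generated_text"])
```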
#### Note
This model is deranged.
|
shg1421/t5_astrology_peft | shg1421 | 2024-03-06T04:01:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T01:20:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Sumail/Bubble_bee04_2b | Sumail | 2024-03-06T03:55:03Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:tomaszki/gemma-28",
"base_model:merge:tomaszki/gemma-28",
"base_model:tomaszki/gemma-28-copy",
"base_model:merge:tomaszki/gemma-28-copy",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T03:52:00Z | ---
base_model:
- tomaszki/gemma-28-copy
- tomaszki/gemma-28
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [tomaszki/gemma-28-copy](https://huggingface.co/tomaszki/gemma-28-copy)
* [tomaszki/gemma-28](https://huggingface.co/tomaszki/gemma-28)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: tomaszki/gemma-28
layer_range: [0, 18]
- model: tomaszki/gemma-28-copy
layer_range: [0, 18]
merge_method: slerp
base_model: tomaszki/gemma-28
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
OwOOwO/eacc_dc_4 | OwOOwO | 2024-03-06T03:48:57Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T03:46:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dyang415/mixtral-fc-w-resp-new-format-4e-no-negative | dyang415 | 2024-03-06T03:47:37Z | 7 | 0 | peft | [
"peft",
"safetensors",
"mixtral",
"generated_from_trainer",
"base_model:mistralai/Mixtral-8x7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mixtral-8x7B-Instruct-v0.1",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-03-05T05:03:16Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
model-index:
- name: mixtral-fc-w-resp-new-format-4e-no-negative
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
model_type: AutoModelForCausalLM
tokenizer_type: LlamaTokenizer
trust_remote_code: true
load_in_8bit: false
load_in_4bit: true
strict: false
chat_template: inst
datasets:
- path: ./data/with_function_response/function_not_used_training.jsonl
type: sharegpt
conversation: mistral
# - path: ./data/with_function_response/no_function_training.jsonl
# type: sharegpt
# conversation: mistral
- path: ./data/with_function_response/function_used_training.jsonl
type: sharegpt
conversation: mistral
dataset_prepared_path: last_run_prepared
val_set_size: 0.0
output_dir: ../mixtral-fc-w-resp-new-format-4e-no-negative
model_config:
output_router_logits: true
adapter: qlora
lora_model_dir:
sequence_len: 16384
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 64
lora_dropout: 0.05
lora_target_modules:
- q_proj
- k_proj
- v_proj
- o_proj
wandb_project: function-call
wandb_name: mixtral-instruct-lora-no-negative
wandb_log_model: end
hub_model_id: dyang415/mixtral-fc-w-resp-new-format-4e-no-negative
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 4
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false
gradient_checkpointing: true
logging_steps: 1
flash_attention: true
loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 1
debug:
weight_decay: 0.0
fsdp:
fsdp_config:
```
</details><br>
# mixtral-fc-w-resp-new-format-4e-no-negative
This model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: QuantizationMethod.BITS_AND_BYTES
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
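As a hedged illustration (not part of the generated card), the quantization settings listed above correspond to a `BitsAndBytesConfig`, and the adapter itself would typically be attached to the 4-bit base model with `peft`; the exact loading choices here are assumptions, not the author's script.
```python
# Hedged sketch: load the 4-bit base model with the config listed above and
# attach this LoRA adapter with peft.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, "dyang415/mixtral-fc-w-resp-new-format-4e-no-negative")
```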
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 4
### Framework versions
- PEFT 0.7.0
- Transformers 4.37.0
- Pytorch 2.0.1+cu117
- Datasets 2.17.1
- Tokenizers 0.15.0 |
EarthnDusk/Dataset_Dumps_Zips | EarthnDusk | 2024-03-06T03:44:11Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-20T00:39:51Z | ---
license: creativeml-openrail-m
---
Datasets Zipped.
Please note these are for educational and research purposes only.
You are liable for any illegal use of these zip files.
Some of these have already been made into concepts, but they may also fit into concepts mixed with other datasets.
Please do not use these for illegal purposes, and if any of it is indeed E&D property - eg: Duskfall art, anything we've made in Second Life - don't claim the data as your own.
We feel confident in sharing the datasets, and you will clearly be the trainer of your LoRA or full model, but you won't own the data that you use from this repository.
Occasionally this repo may be made private; if you've gotten the link from us before and want to access it, request access to join the team.
Join our Reddit: https://www.reddit.com/r/earthndusk/
WE ARE PROUDLY SPONSORED BY: https://www.piratediffusion.com/
Listen to the music that we've made that goes with our art: https://open.spotify.com/playlist/00R8x00YktB4u541imdSSf?si=b60d209385a74b38
Any chance you can spare a coffee or three? https://ko-fi.com/DUSKFALLcrew
[](https://ko-fi.com/Z8Z8L4EO)
|
Commandante/german-party-sentiment-bert | Commandante | 2024-03-06T03:43:59Z | 18 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:oliverguhr/german-sentiment-bert",
"base_model:finetune:oliverguhr/german-sentiment-bert",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-22T09:34:10Z | ---
license: mit
base_model: oliverguhr/german-sentiment-bert
tags:
- generated_from_trainer
model-index:
- name: german-party-sentiment-bert-complete-gsbert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# German-Party-Sentiment-Bert
This model is a fine-tuned version of [oliverguhr/german-sentiment-bert](https://huggingface.co/oliverguhr/german-sentiment-bert) on a dataset consisting of mentions of German political parties.
It achieves the following results on the evaluation set:
- Loss: 0.8912
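As a hedged example (not part of the generated card), party-mention sentiment can be queried through the text-classification pipeline; the German example sentence is an illustrative assumption.
```python
# Minimal sketch; the example sentence is a placeholder.
from transformers import pipeline

clf = pipeline("text-classification", model="Commandante/german-party-sentiment-bert")
print(clf("Die SPD hat heute einen vernünftigen Vorschlag vorgelegt."))
```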
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 20
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 120
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2844 | 1.0 | 65 | 0.9382 |
| 0.9704 | 2.0 | 130 | 0.8912 |
| 0.7394 | 3.0 | 195 | 1.0455 |
| 0.5401 | 4.0 | 260 | 1.2711 |
| 0.4274 | 5.0 | 325 | 1.3578 |
| 0.2289 | 6.0 | 390 | 1.6143 |
| 0.1949 | 7.0 | 455 | 1.8376 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Tokenizers 0.15.1
|
OwOOwO/eacc_dc3 | OwOOwO | 2024-03-06T03:43:30Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T03:23:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
seyf1elislam/WestKunai-XS-7b-GGUF | seyf1elislam | 2024-03-06T03:43:19Z | 22 | 0 | null | [
"gguf",
"GGUF",
"base_model:seyf1elislam/WestKunai-XS-7b",
"base_model:quantized:seyf1elislam/WestKunai-XS-7b",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T01:41:02Z | ---
tags:
- GGUF
base_model:
- seyf1elislam/WestKunai-X-7b
---
# WestKunai-X-7b
- Model creator: [seyf1elislam](https://huggingface.co/seyf1elislam)
- Original model: [WestKunai-X-7b](https://huggingface.co/seyf1elislam/WestKunai-X-7b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [seyf1elislam's WestKunai-X-7b](https://huggingface.co/seyf1elislam/WestKunai-X-7b).
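As a hedged aside (not part of the original card), any of the quantized files listed under "Provided files" below can be fetched with `huggingface_hub` before loading it in a GGUF runtime; the repo id and file name mirror the links in the table and are otherwise assumptions.
```python
# Minimal sketch: download one of the quants listed below.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="seyf1elislam/WestKunai-X-7b-GGUF",
    filename="westkunai-x-7b.Q4_K_M.gguf",
)
print(local_path)
```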
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [westkunai-x-7b.Q2_K.gguf ](https://huggingface.co/seyf1elislam/WestKunai-X-7b-GGUF/blob/main/westkunai-x-7b.Q2_K.gguf ) | Q2_K | 2 | 2.72 GB| 5.22 GB | significant quality loss - not recommended for most purposes |
| [westkunai-x-7b.Q3_K_M.gguf ](https://huggingface.co/seyf1elislam/WestKunai-X-7b-GGUF/blob/main/westkunai-x-7b.Q3_K_M.gguf ) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [WestKunai-X-7b.Q4_K_S.gguf ](https://huggingface.co/seyf1elislam/WestKunai-X-7b-GGUF/blob/main/WestKunai-X-7b.Q4_K_S.gguf ) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [westkunai-x-7b.Q4_K_M.gguf ](https://huggingface.co/seyf1elislam/WestKunai-X-7b-GGUF/blob/main/westkunai-x-7b.Q4_K_M.gguf ) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [westkunai-x-7b.Q5_K_M.gguf ](https://huggingface.co/seyf1elislam/WestKunai-X-7b-GGUF/blob/main/westkunai-x-7b.Q5_K_M.gguf ) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [westkunai-x-7b.Q6_K.gguf ](https://huggingface.co/seyf1elislam/WestKunai-X-7b-GGUF/blob/main/westkunai-x-7b.Q6_K.gguf ) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [westkunai-x-7b.Q8_0.gguf ](https://huggingface.co/seyf1elislam/WestKunai-X-7b-GGUF/blob/main/westkunai-x-7b.Q8_0.gguf ) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended | |
nitky/Superswallow-70b-NVE | nitky | 2024-03-06T03:42:07Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"en",
"ja",
"arxiv:2311.10702",
"arxiv:2203.05482",
"base_model:allenai/tulu-2-dpo-70b",
"base_model:merge:allenai/tulu-2-dpo-70b",
"base_model:tokyotech-llm/Swallow-70b-instruct-hf",
"base_model:merge:tokyotech-llm/Swallow-70b-instruct-hf",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T02:45:45Z | ---
base_model:
- tokyotech-llm/Swallow-70b-instruct-hf
- allenai/tulu-2-dpo-70b
tags:
- mergekit
- merge
language:
- en
- ja
library_name: transformers
pipeline_tag: text-generation
license: llama2
model_type: llama
---
# Superswallow-70b-NVE
**Important Notice:**
This model partially utilizes the parameters of Tulu V2 DPO, which is finetuned from Llama 2, so it may inherit the AI2 ImpACT license. Please use the model keeping in mind that there may be changes regarding the license if AI2 contacts me.
The [AI2 ImpACT license](https://allenai.org/impact-license) includes information about data artifacts and model artifacts, but does not cover the case of directly applying parts of the LLM parameters of a model artifact to other models. However, I respect their research and great work, so I will change the license immediately if AI2 contacts me.
## Description
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). The model was created by injecting the ability to follow user intent from [Tulu 2 DPO](https://arxiv.org/abs/2311.10702) into the [Swallow](https://zenn.dev/tokyotech_lm/articles/d6cb3a8fdfc907) instruct model.
This was a proof of concept for merging LLMs trained in different languages, with close attention paid to preserving the linguistic capabilities of the base model of the merge.
As far as I know, Swallow is the Llama 2 model available as a full set of sizes (7B, 13B, 70B) that can output the most beautiful Japanese, so I used it as the base model for this merge. Thanks to the authors for their wonderful work.
## Test environment
This model was tested using [text-generation-webui](https://github.com/oobabooga/text-generation-webui/tree/main). I use the `simple-1` preset and the `Null preset` for generation.
### Recommendation
Use `simple-1` settings:
- temperature: 0.7
- top_p: 0.9
- repetition_penalty: 1.15
- top_k: 20
### Tested `temperature` Range
- temperature: 0.3 - 1.0
It works fine in most cases, but depending on the prompt, the output may become unstable at temperatures around 1.0.
**If the output does not follow the user intent, please lower the temperature to 0.5 or less.**
### Tested `repetition_penalty` Range
- repetition_penalty: 1.0 - 1.15
It works fine in most cases, but depending on the prompt, the output may become unstable at a repetition_penalty around 1.0.
## Prompt template
Both of the following prompt templates are supported.
### Tulu Style
```
<|user|>
Your message here!
<|assistant|>
```
For best results, format all inputs in this manner. **Make sure to include a newline after `<|assistant|>`; this can affect generation quality quite a bit.**
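As a quick reference, here is a minimal sketch of building a Tulu-style prompt string in Python (the helper name and message are just illustrations; pass the result to the tokenizer exactly as in the usage example further below):

```python
# Minimal sketch: format a message in the Tulu style, keeping the required trailing newline.
def build_tulu_prompt(user_message: str) -> str:
    return f"<|user|>\n{user_message}\n<|assistant|>\n"

prompt = build_tulu_prompt("Summarize the strengths of merging multilingual LLMs.")
# Tokenize and generate with `prompt` exactly as in the usage example further below.
```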
### Swallow Style (Alpaca format)
```
以下に、あるタスクを説明する指示があり、それに付随する入力が更なる文脈を提供しています。リクエストを適切に完了するための回答を記述してください。
### 指示:
{instruction}
### 応答:
```
## Use the instruct model
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "nitky/Superswallow-70b-NVE"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, low_cpu_mem_usage=True, device_map="auto", load_in_4bit = True)
PROMPT_DICT = {
"prompt_input": (
"以下に、あるタスクを説明する指示があり、それに付随する入力が更なる文脈を提供しています。"
"リクエストを適切に完了するための回答を記述してください。\n\n"
"### 指示:\n{instruction}\n\n### 入力:\n{input}\n\n### 応答:"
),
"prompt_no_input": (
"以下に、あるタスクを説明する指示があります。"
"リクエストを適切に完了するための回答を記述してください。\n\n"
"### 指示:\n{instruction}\n\n### 応答:"
),
}
def create_prompt(instruction, input=None):
"""
Generates a prompt based on the given instruction and an optional input.
If input is provided, it uses the 'prompt_input' template from PROMPT_DICT.
If no input is provided, it uses the 'prompt_no_input' template.
Args:
instruction (str): The instruction describing the task.
input (str, optional): Additional input providing context for the task. Default is None.
Returns:
str: The generated prompt.
"""
if input:
# Use the 'prompt_input' template when additional input is provided
return PROMPT_DICT["prompt_input"].format(instruction=instruction, input=input)
else:
# Use the 'prompt_no_input' template when no additional input is provided
return PROMPT_DICT["prompt_no_input"].format(instruction=instruction)
# Example usage
instruction_example = "以下のトピックに関する詳細な情報を提供してください。"
input_example = "東京工業大学の主なキャンパスについて教えてください"
prompt = create_prompt(instruction_example, input_example)
input_ids = tokenizer.encode(
prompt,
add_special_tokens=False,
return_tensors="pt"
)
tokens = model.generate(
input_ids.to(device=model.device),
max_new_tokens=200,
temperature=0.7,
top_p=0.9,
repetition_penalty=1.15,
top_k=20,
do_sample=True,
)
out = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(out)
```
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* [tokyotech-llm/Swallow-70b-NVE-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-NVE-instruct-hf)
* [allenai/tulu-2-dpo-70b](https://huggingface.co/allenai/tulu-2-dpo-70b)
### Configuration
An example command:
```bash
# please change the path and options according to your environment
mergekit-mega --cuda Superswallow-70b-NVE.yml ~/text-generation-webui/models
```
The following YAML configuration was used to produce this model:
```yaml
models:
- model: tokyotech-llm/Swallow-70b-NVE-instruct-hf
parameters:
weight: 1.0
- model: allenai/tulu-2-dpo-70b
parameters:
weight: 1.0
merge_method: linear
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-70b-NVE-instruct-hf
name: Superswallow-70b-NVE
```
|
DingosGotMyBaby/uhn-twitch-chat | DingosGotMyBaby | 2024-03-06T03:34:11Z | 100 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-06-23T15:59:10Z | ---
license: mit
---
# A model based on UberHaxorNova's Twitch chat
Trained on over 700 VODs' worth of chat; with some scuffed filtering it became a roughly 300 MB dataset.
## Dataset
The dataset was created by downloading the chat of every VOD available at the time of creation as JSON and stripping out all the chat messages into a simple line-by-line text file.
## Training
This was trained using [aitextgen](https://github.com/minimaxir/aitextgen), created by [Max Woolf](https://github.com/minimaxir), using the example notebook found [here](https://colab.research.google.com/drive/15qBZx5y9rdaQSyWpsreMDnTiZ5IlN0zD?usp=sharing). Using GPT-2's 124M model as the base, it was trained for 3000 steps and produces an output scuffed enough to look like a real Twitch chat user.
## Use
This was created as a fun little project for the Discord server and, as such, should only be used for fun and not to harm people. This model must also follow the ethics guide of the tool that created it: https://github.com/minimaxir/aitextgen/blob/master/docs/ethics.md
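If you just want to play with it, here is a minimal sketch using the standard 🤗 Transformers pipeline (assumed to work since this is a plain GPT-2 checkpoint; the seed text is arbitrary):

```python
# Minimal sketch: sample a few chat-style lines from the model.
from transformers import pipeline

generator = pipeline("text-generation", model="DingosGotMyBaby/uhn-twitch-chat")
for sample in generator("LUL ", max_new_tokens=40, num_return_sequences=3, do_sample=True):
    print(sample["generated_text"])
```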
|
Ponce-01/DFEP-03 | Ponce-01 | 2024-03-06T03:32:57Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"safetensors",
"autotrain",
"text-generation",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T03:23:43Z | ---
tags:
- autotrain
- text-generation
widget:
- text: 'I love AutoTrain because '
license: other
library_name: adapter-transformers
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
agnedil/Mistral-7B-openassistant-guanaco-v2 | agnedil | 2024-03-06T03:20:17Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"region:us"
] | null | 2024-03-05T08:55:35Z | Model [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) fine-tuned on the [openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco) dataset using the following [Colab notebook](https://colab.research.google.com/drive/19lYWzMvZAc2cWPojRiPnYIR5Ok62CgFQ?usp=drive_link). |
julienkay/stable-diffusion-2-1 | julienkay | 2024-03-06T03:17:54Z | 0 | 0 | null | [
"onnx",
"text-to-image",
"license:openrail++",
"region:us"
] | text-to-image | 2024-02-15T20:47:45Z | ---
license: openrail++
pipeline_tag: text-to-image
---
The official [stabilityai/stable-diffusion-2-1](https://huggingface.co/stabilityai/stable-diffusion-2-1) model converted to ONNX for usage with Unity Sentis.
See [com.doji.diffusers](https://github.com/julienkay/com.doji.diffusers) for details.
|
ho1iday/pokemon-lora | ho1iday | 2024-03-06T03:14:20Z | 1 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-03-05T12:34:41Z | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
base_model: runwayml/stable-diffusion-v1-5
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - ho1iday/pokemon-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images below.




## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
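Until the snippet above is filled in, here is a minimal sketch of the usual diffusers LoRA pattern (assuming the repo stores weights in the standard diffusers LoRA format; the prompt, dtype, and device are illustrative):

```python
# Minimal sketch: load the base model, attach the LoRA weights, and generate an image.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("ho1iday/pokemon-lora")

image = pipe("a cute dragon-type pokemon, high quality").images[0]
image.save("pokemon_lora_sample.png")
```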
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
The LoRA was trained on the [lambdalabs/pokemon-blip-captions](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions) dataset, as noted above. |
Deepnoid/OPEN-SOLAR-KO-10.7B-v13 | Deepnoid | 2024-03-06T03:06:25Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"generated_from_trainer",
"base_model:beomi/OPEN-SOLAR-KO-10.7B",
"base_model:adapter:beomi/OPEN-SOLAR-KO-10.7B",
"license:apache-2.0",
"region:us"
] | null | 2024-03-06T02:11:12Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: beomi/OPEN-SOLAR-KO-10.7B
model-index:
- name: data/Models/OPEN-SOLAR-KO-10.7B-v13
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
# data/Models/OPEN-SOLAR-KO-10.7B-v13
This model is a fine-tuned version of [beomi/OPEN-SOLAR-KO-10.7B](https://huggingface.co/beomi/OPEN-SOLAR-KO-10.7B) on the None dataset.
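This is a PEFT (LoRA) adapter and no usage snippet is provided, so here is a minimal, hedged sketch of attaching it to the base model (the adapter layout in this repo is an assumption):

```python
# Minimal sketch: load the base OPEN-SOLAR-KO model and attach this PEFT adapter.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "beomi/OPEN-SOLAR-KO-10.7B"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, "Deepnoid/OPEN-SOLAR-KO-10.7B-v13")
```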
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0 |
Joyqiuyue/JoyFineTune | Joyqiuyue | 2024-03-06T02:54:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T02:30:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mirfan899/kids_phoneme_sm_model | mirfan899 | 2024-03-06T02:54:21Z | 41 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:mirfan899/kids_phoneme_sm",
"base_model:facebook/wav2vec2-large-xlsr-53",
"base_model:finetune:facebook/wav2vec2-large-xlsr-53",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-06-10T11:56:31Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mirfan899/kids_phoneme_sm
base_model: facebook/wav2vec2-large-xlsr-53
model-index:
- name: kids_phoneme_sm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kids_phoneme_sm_model
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the [mirfan899/kids_phoneme_sm](https://huggingface.co/datasets/mirfan899/kids_phoneme_sm) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5405
- Cer: 0.2770
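No usage snippet is provided; a minimal inference sketch with the 🤗 Transformers ASR pipeline (assuming the repo includes the processor files; the audio path is a placeholder) would look like:

```python
# Minimal sketch: run phoneme-level transcription with the ASR pipeline.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="mirfan899/kids_phoneme_sm_model")
print(asr("child_recording.wav")["text"])  # placeholder file path; expects 16 kHz mono audio
```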
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.2595 | 0.74 | 500 | 3.7094 | 1.0 |
| 2.8393 | 1.48 | 1000 | 3.2563 | 1.0 |
| 2.7916 | 2.22 | 1500 | 3.0450 | 1.0 |
| 1.9585 | 2.96 | 2000 | 1.0280 | 0.8428 |
| 1.0099 | 3.7 | 2500 | 0.6477 | 0.5162 |
| 0.7968 | 4.44 | 3000 | 0.5551 | 0.4592 |
| 0.6977 | 5.19 | 3500 | 0.5107 | 0.4065 |
| 0.609 | 5.93 | 4000 | 0.4763 | 0.3916 |
| 0.5941 | 6.67 | 4500 | 0.4817 | 0.3850 |
| 0.5411 | 7.41 | 5000 | 0.4755 | 0.3639 |
| 0.5021 | 8.15 | 5500 | 0.4649 | 0.3622 |
| 0.4884 | 8.89 | 6000 | 0.4630 | 0.3569 |
| 0.4484 | 9.63 | 6500 | 0.4675 | 0.3420 |
| 0.4432 | 10.37 | 7000 | 0.4192 | 0.3402 |
| 0.399 | 11.11 | 7500 | 0.4508 | 0.3310 |
| 0.4215 | 11.85 | 8000 | 0.4406 | 0.3345 |
| 0.366 | 12.59 | 8500 | 0.4620 | 0.3248 |
| 0.3708 | 13.33 | 9000 | 0.4594 | 0.3327 |
| 0.3352 | 14.07 | 9500 | 0.4649 | 0.3121 |
| 0.3468 | 14.81 | 10000 | 0.4413 | 0.3020 |
| 0.3283 | 15.56 | 10500 | 0.4948 | 0.2915 |
| 0.3222 | 16.3 | 11000 | 0.4870 | 0.3025 |
| 0.3081 | 17.04 | 11500 | 0.4779 | 0.2919 |
| 0.3099 | 17.78 | 12000 | 0.4927 | 0.2871 |
| 0.2485 | 18.52 | 12500 | 0.5013 | 0.2831 |
| 0.3163 | 19.26 | 13000 | 0.4929 | 0.2888 |
| 0.2555 | 20.0 | 13500 | 0.5234 | 0.2888 |
| 0.2705 | 20.74 | 14000 | 0.5259 | 0.2818 |
| 0.2632 | 21.48 | 14500 | 0.5105 | 0.2831 |
| 0.2374 | 22.22 | 15000 | 0.5284 | 0.2845 |
| 0.2565 | 22.96 | 15500 | 0.5237 | 0.2875 |
| 0.2394 | 23.7 | 16000 | 0.5368 | 0.2818 |
| 0.2458 | 24.44 | 16500 | 0.5386 | 0.2814 |
| 0.2383 | 25.19 | 17000 | 0.5366 | 0.2788 |
| 0.2152 | 25.93 | 17500 | 0.5320 | 0.2770 |
| 0.231 | 26.67 | 18000 | 0.5441 | 0.2779 |
| 0.2061 | 27.41 | 18500 | 0.5448 | 0.2796 |
| 0.245 | 28.15 | 19000 | 0.5413 | 0.2796 |
| 0.2119 | 28.89 | 19500 | 0.5379 | 0.2774 |
| 0.2155 | 29.63 | 20000 | 0.5405 | 0.2770 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.13.0
- Tokenizers 0.13.3
|
bartowski/Flora_7B-GGUF | bartowski | 2024-03-06T02:48:22Z | 17 | 2 | transformers | [
"transformers",
"gguf",
"finetune",
"text-generation",
"en",
"dataset:ResplendentAI/Synthetic_Soul_1k",
"base_model:jeiku/FloraBase",
"base_model:quantized:jeiku/FloraBase",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T02:34:06Z | ---
base_model:
- jeiku/FloraBase
- jeiku/Synthetic_Soul_1k_Mistral_128
library_name: transformers
tags:
- finetune
license: cc-by-sa-4.0
datasets:
- ResplendentAI/Synthetic_Soul_1k
language:
- en
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp Quantizations of Flora_7B
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2334">b2334</a> for quantization.
Original model: https://huggingface.co/ResplendentAI/Flora_7B
Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Flora_7B-Q8_0.gguf](https://huggingface.co/bartowski/Flora_7B-GGUF/blob/main/Flora_7B-Q8_0.gguf) | Q8_0 | 7.69GB | Extremely high quality, generally unneeded but max available quant. |
| [Flora_7B-Q6_K.gguf](https://huggingface.co/bartowski/Flora_7B-GGUF/blob/main/Flora_7B-Q6_K.gguf) | Q6_K | 5.94GB | Very high quality, near perfect, *recommended*. |
| [Flora_7B-Q5_K_M.gguf](https://huggingface.co/bartowski/Flora_7B-GGUF/blob/main/Flora_7B-Q5_K_M.gguf) | Q5_K_M | 5.13GB | High quality, very usable. |
| [Flora_7B-Q5_K_S.gguf](https://huggingface.co/bartowski/Flora_7B-GGUF/blob/main/Flora_7B-Q5_K_S.gguf) | Q5_K_S | 4.99GB | High quality, very usable. |
| [Flora_7B-Q5_0.gguf](https://huggingface.co/bartowski/Flora_7B-GGUF/blob/main/Flora_7B-Q5_0.gguf) | Q5_0 | 4.99GB | High quality, older format, generally not recommended. |
| [Flora_7B-Q4_K_M.gguf](https://huggingface.co/bartowski/Flora_7B-GGUF/blob/main/Flora_7B-Q4_K_M.gguf) | Q4_K_M | 4.36GB | Good quality, similar to 4.25 bpw. |
| [Flora_7B-Q4_K_S.gguf](https://huggingface.co/bartowski/Flora_7B-GGUF/blob/main/Flora_7B-Q4_K_S.gguf) | Q4_K_S | 4.14GB | Slightly lower quality with small space savings. |
| [Flora_7B-Q4_0.gguf](https://huggingface.co/bartowski/Flora_7B-GGUF/blob/main/Flora_7B-Q4_0.gguf) | Q4_0 | 4.10GB | Decent quality, older format, generally not recommended. |
| [Flora_7B-Q3_K_L.gguf](https://huggingface.co/bartowski/Flora_7B-GGUF/blob/main/Flora_7B-Q3_K_L.gguf) | Q3_K_L | 3.82GB | Lower quality but usable, good for low RAM availability. |
| [Flora_7B-Q3_K_M.gguf](https://huggingface.co/bartowski/Flora_7B-GGUF/blob/main/Flora_7B-Q3_K_M.gguf) | Q3_K_M | 3.51GB | Even lower quality. |
| [Flora_7B-Q3_K_S.gguf](https://huggingface.co/bartowski/Flora_7B-GGUF/blob/main/Flora_7B-Q3_K_S.gguf) | Q3_K_S | 3.16GB | Low quality, not recommended. |
| [Flora_7B-Q2_K.gguf](https://huggingface.co/bartowski/Flora_7B-GGUF/blob/main/Flora_7B-Q2_K.gguf) | Q2_K | 2.71GB | Extremely low quality, *not* recommended. |
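To sanity-check a download, here is a minimal llama.cpp invocation (binary name and flags follow the release noted above and may differ in newer builds):

```bash
# Minimal sketch: run the recommended Q4_K_M quant with llama.cpp's example binary.
./main -m Flora_7B-Q4_K_M.gguf -p "Write a short greeting." -n 128
```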
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
aditya11997/kandi2-decoder-3.2 | aditya11997 | 2024-03-06T02:47:47Z | 2 | 1 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"kandinsky",
"text-to-image",
"diffusers-training",
"dataset:kbharat7/DogChestXrayDatasetNew",
"base_model:kandinsky-community/kandinsky-2-2-decoder",
"base_model:finetune:kandinsky-community/kandinsky-2-2-decoder",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:KandinskyV22Pipeline",
"region:us"
] | text-to-image | 2024-03-05T20:26:13Z |
---
license: creativeml-openrail-m
base_model: kandinsky-community/kandinsky-2-2-decoder
datasets:
- kbharat7/DogChestXrayDatasetNew
prior:
- kandinsky-community/kandinsky-2-2-prior
tags:
- kandinsky
- text-to-image
- diffusers
- diffusers-training
inference: true
---
# Finetuning - aditya11997/kandi2-decoder-3.2
This pipeline was finetuned from **kandinsky-community/kandinsky-2-2-decoder** on the **kbharat7/DogChestXrayDatasetNew** dataset. Below are some example images generated with the finetuned pipeline using the following prompts: ['photo of dogxraysmall']:

## Pipeline usage
You can use the pipeline like so:
```python
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("aditya11997/kandi2-decoder-3.2", torch_dtype=torch.float16)
prompt = "photo of dogxraysmall"
image = pipeline(prompt).images[0]
image.save("my_image.png")
```
## Training info
These are the key hyperparameters used during training:
* Epochs: 43
* Learning rate: 1e-05
* Batch size: 1
* Gradient accumulation steps: 4
* Image resolution: 768
* Mixed-precision: None
|
kharato/opt-125m-gptq_4 | kharato | 2024-03-06T02:43:41Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"opt",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2024-03-06T02:43:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lmh2011/whisper-small-vi | lmh2011 | 2024-03-06T02:42:59Z | 60 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"vi",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-03-04T07:35:59Z | ---
language:
- vi
license: apache-2.0
base_model: openai/whisper-small
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper Small Vi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Vi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
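A minimal inference sketch using the 🤗 Transformers ASR pipeline (the file path is a placeholder):

```python
# Minimal sketch: transcribe a Vietnamese audio clip with the fine-tuned checkpoint.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="lmh2011/whisper-small-vi")
print(asr("vietnamese_sample.wav")["text"])  # placeholder file path; 16 kHz mono audio works best
```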
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
kkimdev/llama-2-7b-bnb-4bit-3 | kkimdev | 2024-03-06T02:41:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-2-13b-bnb-4bit",
"base_model:finetune:unsloth/llama-2-13b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T02:40:08Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-2-13b-bnb-4bit
---
# Uploaded model
- **Developed by:** kkimdev
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-2-13b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Verlocksss/q-FrozenLake-v1-4x4-noSlippery | Verlocksss | 2024-03-06T02:39:09Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-06T02:39:05Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Verlocksss/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
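The snippet above assumes a `load_from_hub` helper (and an imported `gym`); one possible sketch of that helper, following the common pattern of pulling a pickled Q-table from the Hub, is:

```python
# Possible implementation of the load_from_hub helper used above (assumed, not part of this repo).
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str):
    """Download a pickled Q-learning model dict (q-table, env_id, hyperparameters) from the Hub."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```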
|
kharato/opt-125m-gptq | kharato | 2024-03-06T02:38:24Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"opt",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"2-bit",
"gptq",
"region:us"
] | text-generation | 2024-03-06T02:38:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
OwOOwO/eacc_3_9 | OwOOwO | 2024-03-06T02:37:20Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T02:34:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sampraxi/v5 | sampraxi | 2024-03-06T02:34:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T02:34:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
OwOOwO/eacc_bm2c10 | OwOOwO | 2024-03-06T02:33:56Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T02:31:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
regmisaugat59/phi-1_5-finetuned | regmisaugat59 | 2024-03-06T02:28:44Z | 4 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"phi",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/phi-1_5",
"base_model:adapter:microsoft/phi-1_5",
"license:mit",
"region:us"
] | null | 2024-03-06T02:05:13Z | ---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: microsoft/phi-1_5
model-index:
- name: phi-1_5-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-1_5-finetuned
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 1000
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
kody0525/Open-platypus-Commercial-SOLAR-10.7B-v1.0 | kody0525 | 2024-03-06T02:21:55Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"SOLAR-10.7B-v1.0",
"Open-platypus-Commercial",
"en",
"dataset:kyujinpy/Open-platypus-Commercial",
"base_model:upstage/SOLAR-10.7B-v1.0",
"base_model:finetune:upstage/SOLAR-10.7B-v1.0",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T01:59:23Z | ---
license: apache-2.0
language:
- en
tags:
- SOLAR-10.7B-v1.0
- Open-platypus-Commercial
pipeline_tag: text-generation
datasets:
- kyujinpy/Open-platypus-Commercial
base_model: upstage/SOLAR-10.7B-v1.0
model-index:
- name: Open-platypus-Commercial-SOLAR-10.7B-v1.0
results: []
---
Update @ 2024.03.05
# Open-platypus-Commercial-SOLAR-10.7B-v1.0
This model is a fine-tuned version of upstage/SOLAR-10.7B-v1.0
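The card does not include a usage snippet; below is a minimal, hedged loading sketch with the standard transformers API. The Alpaca-style prompt format is an assumption based on the Open-Platypus data, not something documented here.
```python
# Sketch only: load the fine-tuned checkpoint and generate a response.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "kody0525/Open-platypus-Commercial-SOLAR-10.7B-v1.0"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

# Alpaca-style prompt is assumed, since Open-Platypus uses instruction/response pairs.
prompt = "### Instruction:\nSummarize what instruction tuning does.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```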
## Training hyperparameters
The following hyperparameters were used during training:
- batch_size = 16
- num_epochs = 1
- micro_batch = 1
- cutoff_len = 4096
- learning_rate = 4e-4
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.7
- Tokenizers 0.14.1 |
aka7774/ECCV2022-RIFE | aka7774 | 2024-03-06T02:21:51Z | 0 | 0 | null | [
"region:us"
] | null | 2023-03-29T08:30:39Z | # ECCV2022-RIFE and assorted models
- Not implementing this for now, due to lack of motivation
## What does it do?
- Code to run a frame-interpolation tool I once tried out when I wanted to make videos with Stable Diffusion
- The kind of thing that uses AI to interpolate video frames nicely
- A tool that looks useful for filling in the fps that painstaking video-generation techniques like animatediff cannot reach
- It is apparently bundled with EasyPromptAnime
- Interpolating to roughly 4x sounds appealing, but the result is reportedly so smooth it feels uncanny?
## Installation
- The upstream project appears to be https://github.com/megvii-research/ECCV2022-RIFE
- For some reason you cannot use it without downloading an (unofficial?) model from Google Drive
- https://drive.google.com/file/d/1APIzVeI-4ZZCEuIRE1m6WYfSCaOsi_7_/view
- The model download used to be available only via Baidu for some reason, which was a struggle
- Various derivative projects are linked from the upstream repo, and the models appear to keep improving
- I have not looked into which one is the best choice
- The old problem of torch/torchvision being pinned to outdated versions has since been resolved
## How to run it
- It is implemented in Python, but only a command-line version exists
- Wrapping it in Gradio or FastAPI is a hassle
- There are examples of people distributing Windows scripts and batch files
- https://qiita.com/amaman/items/743c42d365a4e3bc155f
- These also go through the command line, so they are tedious to tinker with
- Interestingly, it also supports another tool called Super-SloMo
## What I wanted to do
- I wanted to build a Space where you throw in a video and it converts it to a specified fps (a rough sketch of this idea follows below)
- Porting the rife_video script mentioned above seems easiest, though it also looks like it could get messy
I have no plans to use it myself, so this is on hold for now.
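For reference, a rough sketch of the kind of Space wrapper described above. The `inference_video.py` script name, its `--video`/`--exp` flags, and the output-file handling are assumptions based on the upstream repo and should be double-checked.
```python
# Rough sketch of a Gradio wrapper around the RIFE command-line script.
# Assumes the ECCV2022-RIFE repo (and its model weights) sit in the working directory.
import glob
import os
import subprocess

import gradio as gr


def interpolate(video_path, exp):
    """Run the upstream CLI; 2**exp is the interpolation factor (assumed flag meaning)."""
    subprocess.run(
        ["python", "inference_video.py", f"--video={video_path}", f"--exp={int(exp)}"],
        check=True,
    )
    # The script writes its result next to the input; grabbing the newest .mp4
    # is a crude placeholder instead of hard-coding the output file name.
    outputs = sorted(glob.glob(os.path.join(os.path.dirname(video_path), "*.mp4")), key=os.path.getmtime)
    return outputs[-1]


demo = gr.Interface(
    fn=interpolate,
    inputs=[gr.Video(label="Input video"), gr.Slider(1, 3, step=1, value=2, label="exp")],
    outputs=gr.Video(label="Interpolated video"),
)

if __name__ == "__main__":
    demo.launch()
```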
|
josephloh/donut-receipts75 | josephloh | 2024-03-06T02:19:22Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-03-06T01:57:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
brescia/IndoBERT | brescia | 2024-03-06T02:16:02Z | 0 | 0 | null | [
"code",
"region:us"
] | null | 2024-03-02T07:03:12Z | ---
tags:
- code
---
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="brescia/IndoBERT")
```
|
cookinai/Blitz-v0.1 | cookinai | 2024-03-06T02:15:08Z | 103 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T00:58:54Z | ---
license: cc-by-4.0
---
# Base finetune of [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on my [Kugelblitz Dataset](https://huggingface.co/datasets/cookinai/kugelblitz-alpha-v0.1)

Trained for only 1 epoch.
V0.2, trained for more epochs, should be coming soon if this one turns out well. |
Litzy619/V0305B2 | Litzy619 | 2024-03-06T02:10:34Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:yahma/llama-7b-hf",
"base_model:finetune:yahma/llama-7b-hf",
"license:other",
"region:us"
] | null | 2024-03-05T21:30:12Z | ---
license: other
base_model: yahma/llama-7b-hf
tags:
- generated_from_trainer
model-index:
- name: V0305B2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0305B2
This model is a fine-tuned version of [yahma/llama-7b-hf](https://huggingface.co/yahma/llama-7b-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0894
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 20
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.352 | 0.09 | 10 | 2.3256 |
| 2.1754 | 0.17 | 20 | 1.8064 |
| 1.2745 | 0.26 | 30 | 0.6844 |
| 0.3789 | 0.34 | 40 | 0.1687 |
| 0.1587 | 0.43 | 50 | 0.1487 |
| 0.1563 | 0.51 | 60 | 0.1506 |
| 0.1505 | 0.6 | 70 | 0.1502 |
| 0.1525 | 0.68 | 80 | 0.1487 |
| 0.1481 | 0.77 | 90 | 0.1492 |
| 0.1504 | 0.85 | 100 | 0.1441 |
| 0.1501 | 0.94 | 110 | 0.1436 |
| 0.1439 | 1.02 | 120 | 0.1360 |
| 0.1411 | 1.11 | 130 | 0.1276 |
| 0.1349 | 1.19 | 140 | 0.1259 |
| 0.1345 | 1.28 | 150 | 0.1190 |
| 0.1299 | 1.37 | 160 | 0.1114 |
| 0.1275 | 1.45 | 170 | 0.1058 |
| 0.1159 | 1.54 | 180 | 0.1013 |
| 0.1189 | 1.62 | 190 | 0.0997 |
| 0.1203 | 1.71 | 200 | 0.1012 |
| 0.1177 | 1.79 | 210 | 0.0973 |
| 0.1144 | 1.88 | 220 | 0.0932 |
| 0.1128 | 1.96 | 230 | 0.0933 |
| 0.1084 | 2.05 | 240 | 0.0952 |
| 0.1081 | 2.13 | 250 | 0.0930 |
| 0.1037 | 2.22 | 260 | 0.0921 |
| 0.1011 | 2.3 | 270 | 0.0923 |
| 0.1072 | 2.39 | 280 | 0.0912 |
| 0.1058 | 2.47 | 290 | 0.0902 |
| 0.1107 | 2.56 | 300 | 0.0899 |
| 0.1066 | 2.65 | 310 | 0.0897 |
| 0.1091 | 2.73 | 320 | 0.0895 |
| 0.103 | 2.82 | 330 | 0.0893 |
| 0.1021 | 2.9 | 340 | 0.0893 |
| 0.103 | 2.99 | 350 | 0.0894 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
CatBarks/t5_es100SEC2_2 | CatBarks | 2024-03-06T02:08:27Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-06T02:05:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jlbaker361/cyberpunk-lora-500-e10-s90-stable-minimal | jlbaker361 | 2024-03-06T02:04:00Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2024-03-05T03:24:20Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - jlbaker361/cyberpunk-lora-500-e10-s90-stable-minimal
These are LoRA adaptation weights for stabilityai/stable-diffusion-2. The weights were fine-tuned on the jlbaker361/cyberpunk-500-cropped dataset.
Training epochs = 10
num_train_timesteps = 90
url: https://wandb.ai/jlbaker361/text2image-fine-tune/runs/mympnw8e
lora scale: 1.0
tag_name: cyberpunk,anime
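A minimal, hedged sketch of applying these adapter weights with diffusers (standard `load_lora_weights` API; the prompt simply reuses the tag names above and the settings are illustrative):
```python
# Sketch: attach the LoRA weights to the base model and sample an image.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("jlbaker361/cyberpunk-lora-500-e10-s90-stable-minimal")

image = pipe("cyberpunk, anime, a neon-lit city street at night", num_inference_steps=30).images[0]
image.save("cyberpunk_sample.png")
```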
You can find some example images in the following.




































|
asn1814/openbookqa_bert-base-uncased_fact_retrieval_k_10 | asn1814 | 2024-03-06T02:01:49Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"multiple-choice",
"generated_from_trainer",
"base_model:asn1814/openbookqa_bert-base-uncased",
"base_model:finetune:asn1814/openbookqa_bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2024-03-06T01:21:46Z | ---
license: apache-2.0
base_model: asn1814/openbookqa_bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: openbookqa_bert-base-uncased_fact_retrieval_k_10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openbookqa_bert-base-uncased_fact_retrieval_k_10
This model is a fine-tuned version of [asn1814/openbookqa_bert-base-uncased](https://huggingface.co/asn1814/openbookqa_bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9155
- Accuracy: 0.59
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3035 | 1.0 | 310 | 1.4148 | 0.57 |
| 0.1243 | 2.0 | 620 | 1.9743 | 0.57 |
| 0.077 | 3.0 | 930 | 2.4690 | 0.584 |
| 0.028 | 4.0 | 1240 | 2.8887 | 0.582 |
| 0.0118 | 5.0 | 1550 | 2.9155 | 0.59 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
BlouseJury/Mistral-7B-Discord-0.1-DPO | BlouseJury | 2024-03-06T01:58:51Z | 9 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"base_model:BlouseJury/Mistral-7B-Discord-0.1",
"base_model:finetune:BlouseJury/Mistral-7B-Discord-0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-29T18:22:59Z | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: BlouseJury/Mistral-7B-Discord-0.1
model-index:
- name: Mistral-7B-Discord-0.1-DPO
results: []
---
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: BlouseJury/Mistral-7B-Discord-0.1
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
is_mistral_derived_model: true
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: Intel/orca_dpo_pairs
type:
system_prompt: ""
field_system: system
field_instruction: question
field_output: rejected
field_output: chosen
format: "[INST] {instruction} [/INST]"
no_input_format: "[INST] {instruction} [/INST]"
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./out
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
eval_sample_packing: false
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 1
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.000005
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_table_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```
</details><br>
# BlouseJury/Mistral-7B-Discord-0.1-DPO
This model is a fine-tuned version of [BlouseJury/Mistral-7B-Discord-0.1](https://huggingface.co/BlouseJury/Mistral-7B-Discord-0.1) on the Intel/orca_dpo_pairs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7923
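Because the config above trains on the `[INST] … [/INST]` template, a hedged inference sketch that reuses that format (generation settings are illustrative):
```python
# Sketch: prompt the DPO-tuned model with the same [INST] template used during training.
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="BlouseJury/Mistral-7B-Discord-0.1-DPO",
    device_map="auto",
)
prompt = "[INST] Explain in two sentences what DPO training changes about a model. [/INST]"
print(chat(prompt, max_new_tokens=128)[0]["generated_text"])
```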
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1157 | 0.01 | 1 | 1.1924 |
| 1.0146 | 0.26 | 19 | 0.8381 |
| 0.9004 | 0.51 | 38 | 0.8015 |
| 0.8425 | 0.77 | 57 | 0.7923 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_BlouseJury__Mistral-7B-Discord-0.1-DPO)
| Metric |Value|
|---------------------------------|----:|
|Avg. |62.29|
|AI2 Reasoning Challenge (25-Shot)|63.23|
|HellaSwag (10-Shot) |83.27|
|MMLU (5-Shot) |62.62|
|TruthfulQA (0-shot) |55.28|
|Winogrande (5-shot) |78.93|
|GSM8k (5-shot) |30.40|
|
AdithyanRS/my-pet-dog | AdithyanRS | 2024-03-06T01:58:46Z | 1 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-03-06T01:54:48Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by AdithyanRS following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
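A minimal, hedged generation sketch with diffusers; the concept phrase in the prompt is an assumption taken from the card title, so adjust it to whatever token was actually used during training:
```python
# Sketch: sample an image from this DreamBooth checkpoint with diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "AdithyanRS/my-pet-dog", torch_dtype=torch.float16
).to("cuda")

# "my-pet-dog" as the concept phrase is assumed from the card title.
image = pipe("a photo of my-pet-dog playing in a park", num_inference_steps=30).images[0]
image.save("my_pet_dog_sample.png")
```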
Sample pictures of this concept:
.jpeg)
|
furrutiav/bert_qa_extractor_2022_ulra_by_question_ef_plus_nllf_v0_best_by_z_value_signal_it_136 | furrutiav | 2024-03-06T01:56:26Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-03-06T01:51:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gokuls/hubert-base-ls960-finetuned-ic-slurp-wt_init-frz | gokuls | 2024-03-06T01:54:27Z | 19 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/hubert-base-ls960",
"base_model:finetune:facebook/hubert-base-ls960",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2024-03-05T15:37:58Z | ---
license: apache-2.0
base_model: facebook/hubert-base-ls960
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: hubert-base-ls960-finetuned-ic-slurp-wt_init-frz
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert-base-ls960-finetuned-ic-slurp-wt_init-frz
This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0889
- Accuracy: 0.4598
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 3.6605 | 1.0 | 527 | 3.6385 | 0.1020 |
| 3.6135 | 2.0 | 1055 | 3.5710 | 0.1200 |
| 3.4222 | 3.0 | 1582 | 3.3394 | 0.1738 |
| 3.1948 | 4.0 | 2110 | 3.2132 | 0.2052 |
| 2.8791 | 5.0 | 2637 | 2.9508 | 0.2581 |
| 2.7807 | 6.0 | 3165 | 2.7201 | 0.3109 |
| 2.4647 | 7.0 | 3692 | 2.6056 | 0.3393 |
| 2.3009 | 8.0 | 4220 | 2.4893 | 0.3816 |
| 2.0953 | 9.0 | 4747 | 2.4874 | 0.3902 |
| 1.8074 | 10.0 | 5275 | 2.4705 | 0.4035 |
| 1.8209 | 11.0 | 5802 | 2.4465 | 0.4177 |
| 1.4822 | 12.0 | 6330 | 2.5310 | 0.4228 |
| 1.426 | 13.0 | 6857 | 2.5097 | 0.4305 |
| 1.2877 | 14.0 | 7385 | 2.5365 | 0.4368 |
| 1.0833 | 15.0 | 7912 | 2.5874 | 0.4404 |
| 1.0709 | 16.0 | 8440 | 2.6478 | 0.4373 |
| 0.8176 | 17.0 | 8967 | 2.7096 | 0.4409 |
| 0.803 | 18.0 | 9495 | 2.7965 | 0.4491 |
| 0.6678 | 19.0 | 10022 | 2.9335 | 0.4470 |
| 0.7066 | 20.0 | 10550 | 3.0013 | 0.4408 |
| 0.5935 | 21.0 | 11077 | 2.9613 | 0.4544 |
| 0.5703 | 22.0 | 11605 | 2.9915 | 0.4534 |
| 0.5 | 23.0 | 12132 | 3.0625 | 0.4556 |
| 0.55 | 24.0 | 12660 | 3.0889 | 0.4598 |
| 0.3977 | 25.0 | 13187 | 3.1962 | 0.4551 |
| 0.4578 | 26.0 | 13715 | 3.2863 | 0.4574 |
| 0.3343 | 27.0 | 14242 | 3.3401 | 0.4531 |
| 0.4414 | 28.0 | 14770 | 3.3229 | 0.4557 |
| 0.2551 | 29.0 | 15297 | 3.4294 | 0.4567 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
kwchoi/DPO_mistral_7b_ultra_0124_v1 | kwchoi | 2024-03-06T01:45:13Z | 49 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-25T00:21:32Z | ---
language:
- en
license: apache-2.0
model-index:
- name: DPO_mistral_7b_ultra_0124_v1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.13
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kwchoi/DPO_mistral_7b_ultra_0124_v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.39
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kwchoi/DPO_mistral_7b_ultra_0124_v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 59.78
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kwchoi/DPO_mistral_7b_ultra_0124_v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 69.45
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kwchoi/DPO_mistral_7b_ultra_0124_v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.48
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kwchoi/DPO_mistral_7b_ultra_0124_v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 25.47
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kwchoi/DPO_mistral_7b_ultra_0124_v1
name: Open LLM Leaderboard
---
Testing the Mistral-Instruct model with the Orca DPO dataset, trying to see the effects of DPO for my own study. Used the Mistral-7B-Instruct-v0.2 model due to its good performance.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_kwchoi__DPO_mistral_7b_ultra_0124_v1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |64.45|
|AI2 Reasoning Challenge (25-Shot)|66.13|
|HellaSwag (10-Shot) |86.39|
|MMLU (5-Shot) |59.78|
|TruthfulQA (0-shot) |69.45|
|Winogrande (5-shot) |79.48|
|GSM8k (5-shot) |25.47|
|
Adeptschneider/biomistral-finetuned-7b-v2.1-8-bit-gguf | Adeptschneider | 2024-03-06T01:41:37Z | 3 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:Adeptschneider/biomistralv2.0-fine-tuned-model",
"base_model:quantized:Adeptschneider/biomistralv2.0-fine-tuned-model",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-03-06T01:37:24Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
base_model: Adeptschneider/biomistralv2.0-fine-tuned-model
---
# Uploaded model
- **Developed by:** Adeptschneider
- **License:** apache-2.0
- **Finetuned from model :** Adeptschneider/biomistralv2.0-fine-tuned-model
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jungyuko/DAVinCI-42dot_LLM-PLM-1.3B-v1.2 | jungyuko | 2024-03-06T01:20:56Z | 55 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-27T01:21:13Z | ---
license: cc-by-nc-4.0
---
## DAVinCI-42dot_LLM-PLM-1.3B-v1.2
This model is a fine-tuned version of [42dot/42dot_LLM-PLM-1.3B](https://huggingface.co/42dot/42dot_LLM-PLM-1.3B) on a custom dataset.
### Model description
More information needed
### Intended uses & limitations
More information needed
### Training and evaluation data
More information needed
### Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
* learning_rate: 2e-05
* train_batch_size: 24
* eval_batch_size: 8
* seed: 42
* gradient_accumulation_steps: 4
* total_train_batch_size: 96
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr_scheduler_type: linear
* num_epochs: 1.0
* mixed_precision_training: Native AMP
### Training results
### Framework versions
* Transformers 4.36.2
* Pytorch 2.1.2+cu121
* Datasets 2.0.0
* Tokenizers 0.15.0
|
asn1814/openbookqa_bert-base-uncased_fact_retrieval | asn1814 | 2024-03-06T01:20:19Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"multiple-choice",
"generated_from_trainer",
"base_model:asn1814/openbookqa_bert-base-uncased",
"base_model:finetune:asn1814/openbookqa_bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2024-03-05T08:34:53Z | ---
license: apache-2.0
base_model: asn1814/openbookqa_bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: openbookqa_bert-base-uncased_fact_retrieval
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openbookqa_bert-base-uncased_fact_retrieval
This model is a fine-tuned version of [asn1814/openbookqa_bert-base-uncased](https://huggingface.co/asn1814/openbookqa_bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9008
- Accuracy: 0.57
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2123 | 1.0 | 155 | 1.7825 | 0.554 |
| 0.096 | 2.0 | 310 | 2.1296 | 0.57 |
| 0.0516 | 3.0 | 465 | 2.4470 | 0.566 |
| 0.0206 | 4.0 | 620 | 2.7527 | 0.56 |
| 0.0135 | 5.0 | 775 | 2.9008 | 0.57 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
farooqkhan2840503/gemma-Instruct-Finetune-simpleinput_20_0.001 | farooqkhan2840503 | 2024-03-06T01:10:16Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T00:47:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
euser/KANN-I-0.1-7b-GGUF | euser | 2024-03-06T01:08:50Z | 23 | 0 | null | [
"gguf",
"GGUF",
"base_model:euser/KANN-I-0.1-7b",
"base_model:quantized:euser/KANN-I-0.1-7b",
"endpoints_compatible",
"region:us"
] | null | 2024-02-18T22:27:16Z | ---
tags:
- GGUF
base_model:
- euser/KANN-I-0.1-7b
---
# KANN-I-0.1-7b
- Model creator: [euser](https://huggingface.co/euser)
- Original model: [KANN-I-0.1-7b](https://huggingface.co/euser/KANN-I-0.1-7b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [euser's KANN-I-0.1-7b ](https://huggingface.co/euser/KANN-I-0.1-7b).
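One hedged way to run these files locally is llama-cpp-python; the file name below is just one of the quantizations listed in the table and the settings are illustrative:
```python
# Sketch: load a downloaded GGUF quantization with llama-cpp-python and complete a prompt.
from llama_cpp import Llama

llm = Llama(model_path="./kann-i-0.1-7b.Q4_K_M.gguf", n_ctx=4096)
result = llm("Explain in one sentence what a GGUF file is.", max_tokens=64)
print(result["choices"][0]["text"])
```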
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [kann-i-0.1-7b.Q2_K.gguf ](https://huggingface.co/euser/KANN-I-0.1-7b-GGUF/blob/main/kann-i-0.1-7b.Q2_K.gguf ) | Q2_K | 2 | 2.72 GB| 5.22 GB | significant quality loss - not recommended for most purposes |
| [kann-i-0.1-7b.Q3_K_M.gguf ](https://huggingface.co/euser/KANN-I-0.1-7b-GGUF/blob/main/kann-i-0.1-7b.Q3_K_M.gguf ) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [KANN-I-0.1-7b.Q4_K_S.gguf ](https://huggingface.co/euser/KANN-I-0.1-7b-GGUF/blob/main/KANN-I-0.1-7b.Q4_K_S.gguf ) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [kann-i-0.1-7b.Q4_K_M.gguf ](https://huggingface.co/euser/KANN-I-0.1-7b-GGUF/blob/main/kann-i-0.1-7b.Q4_K_M.gguf ) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [kann-i-0.1-7b.Q5_K_M.gguf ](https://huggingface.co/euser/KANN-I-0.1-7b-GGUF/blob/main/kann-i-0.1-7b.Q5_K_M.gguf ) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [kann-i-0.1-7b.Q6_K.gguf ](https://huggingface.co/euser/KANN-I-0.1-7b-GGUF/blob/main/kann-i-0.1-7b.Q6_K.gguf ) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [kann-i-0.1-7b.Q8_0.gguf ](https://huggingface.co/euser/KANN-I-0.1-7b-GGUF/blob/main/kann-i-0.1-7b.Q8_0.gguf ) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended | |
euser/wKAN-7b-GGUF | euser | 2024-03-06T01:08:06Z | 75 | 0 | null | [
"gguf",
"GGUF",
"base_model:euser/wKAN-7b",
"base_model:quantized:euser/wKAN-7b",
"endpoints_compatible",
"region:us"
] | null | 2024-02-19T01:42:12Z | ---
tags:
- GGUF
base_model:
- euser/wKAN-7b
---
# wKAN-7b
- Model creator: [euser](https://huggingface.co/euser)
- Original model: [wKAN-7b](https://huggingface.co/euser/wKAN-7b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [euser's wKAN-7b ](https://huggingface.co/euser/wKAN-7b).
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [wkan-7b.Q2_K.gguf ](https://huggingface.co/euser/wKAN-7b-GGUF/blob/main/wkan-7b.Q2_K.gguf ) | Q2_K | 2 | 2.72 GB| 5.22 GB | significant quality loss - not recommended for most purposes |
| [wkan-7b.Q3_K_M.gguf ](https://huggingface.co/euser/wKAN-7b-GGUF/blob/main/wkan-7b.Q3_K_M.gguf ) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [wKAN-7b.Q4_K_S.gguf ](https://huggingface.co/euser/wKAN-7b-GGUF/blob/main/wKAN-7b.Q4_K_S.gguf ) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [wkan-7b.Q4_K_M.gguf ](https://huggingface.co/euser/wKAN-7b-GGUF/blob/main/wkan-7b.Q4_K_M.gguf ) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [wkan-7b.Q5_K_M.gguf ](https://huggingface.co/euser/wKAN-7b-GGUF/blob/main/wkan-7b.Q5_K_M.gguf ) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [wkan-7b.Q6_K.gguf ](https://huggingface.co/euser/wKAN-7b-GGUF/blob/main/wkan-7b.Q6_K.gguf ) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [wkan-7b.Q8_0.gguf ](https://huggingface.co/euser/wKAN-7b-GGUF/blob/main/wkan-7b.Q8_0.gguf ) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended | |
bartowski/Flora_7B-exl2 | bartowski | 2024-03-06T01:07:57Z | 3 | 0 | transformers | [
"transformers",
"finetune",
"text-generation",
"en",
"dataset:ResplendentAI/Synthetic_Soul_1k",
"base_model:jeiku/FloraBase",
"base_model:finetune:jeiku/FloraBase",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T00:55:58Z | ---
base_model:
- jeiku/FloraBase
- jeiku/Synthetic_Soul_1k_Mistral_128
library_name: transformers
tags:
- finetune
license: cc-by-sa-4.0
datasets:
- ResplendentAI/Synthetic_Soul_1k
language:
- en
quantized_by: bartowski
pipeline_tag: text-generation
---
## Exllama v2 Quantizations of Flora_7B
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.14">turboderp's ExLlamaV2 v0.0.14</a> for quantization.
<b>The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)</b>
Each branch contains an individual bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions.
Original model: https://huggingface.co/ResplendentAI/Flora_7B/
| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/bartowski/Flora_7B-exl2/tree/8_0) | 8.0 | 8.0 | 8.4 GB | 9.8 GB | 11.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/Flora_7B-exl2/tree/6_5) | 6.5 | 8.0 | 7.2 GB | 8.6 GB | 10.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/bartowski/Flora_7B-exl2/tree/5_0) | 5.0 | 6.0 | 6.0 GB | 7.4 GB | 9.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/bartowski/Flora_7B-exl2/tree/4_25) | 4.25 | 6.0 | 5.3 GB | 6.7 GB | 8.7 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/bartowski/Flora_7B-exl2/tree/3_5) | 3.5 | 6.0 | 4.7 GB | 6.1 GB | 8.1 GB | Lower quality, only use if you have to. |
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Flora_7B-exl2 Flora_7B-exl2-6_5
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` (only useful if you only care about measurement.json) branch to a folder called `Flora_7B-exl2`:
```shell
mkdir Flora_7B-exl2
huggingface-cli download bartowski/Flora_7B-exl2 --local-dir Flora_7B-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
Linux:
```shell
mkdir Flora_7B-exl2-6_5
huggingface-cli download bartowski/Flora_7B-exl2 --revision 6_5 --local-dir Flora_7B-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
mkdir Flora_7B-exl2-6.5
huggingface-cli download bartowski/Flora_7B-exl2 --revision 6_5 --local-dir Flora_7B-exl2-6.5 --local-dir-use-symlinks False
```
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski |
GIZ/SUBTARGET_multilabel_bge | GIZ | 2024-03-06T00:55:17Z | 6 | 0 | setfit | [
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"dataset:GIZ/policy_classification",
"arxiv:2209.11055",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"co2_eq_emissions",
"region:us"
] | text-classification | 2024-02-17T15:47:11Z | ---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: >-
Unconditional Reduction The level of reduction planned unconditionally is
expected to be up to 35% by 2030 as compared to the Business As Usual (BAU)
scenario, taking 2005 as the reference year. Conditional Reduction In a
conditional mitigation scenario Angola plans to reduce further its
emissions. Therefore, the mitigation options identified in this scenario are
expected to reduce an additional 15% below BAU emission levels by 2030.
- text: >-
Measure 300 MW total installed biomass power capacity in the country by
Sector Energy GHG mitigation target 84 ktCO2e on average per year between
2020 and 2030 Monitoring procedures Newly added biomass capacity will be
monitored on an annual basis by the Department of Climate Change of the
Ministry of Natural Resources and Environment using data from the Ministry
of Energy and Mines Comments - Installed capacity as of 2019 is around
40MW Measure 30% Electric Vehicles penetration for 2-wheelers and
passengers cars in national vehicles mix Sector Transport GHG mitigation
target 30 ktCO2e on average per year between 2020 and 2030 Monitoring
procedures Share of Electric Vehicles in national vehicle mix will be
monitored on an annual basis by the Department of Climate Change of the
Ministry of Natural Resources and Environment using data from the Ministry
of Public Works and Transport.
  - text: "• Australia adopts a target of net zero emissions by 2050. This is an economy-wide target, covering all sectors and gases included in Australia’s national inventory. • In order to achieve net zero by 2050, Australia commits to seven low emissions technology stretch goals - ambitious but realistic goals to bring priority low emissions technologies to economic parity with existing mature technologies."
- text: >-
The GoP has taken a series of major initiatives as outlined in chapters 4
and 5. Hence, Pakistan intends to set a cumulative ambitious conditional
target of overall 50% reduction of its projected emissions by 2030, with 15%
from the country’s own resources and 35% subject to provision of
international grant finance that would require USD 101 billion just for
energy transition. 7.1 HIGH PRIORITY ACTIONS Addressing the Global Climate
Summit at the United Nations in December 2020, the Prime Minister of
Pakistan made an announcement to reduce future GHG emissions on a high
priority basis if international financial and technical resources were made
available: MITIGATION: 1.
- text: >-
This document enfolds Iceland’s first communication on its long-term
strategy (LTS), to be updated when further analysis and policy documents are
published on the matter. Iceland is committed to reducing its overall
greenhouse gas emissions and reaching climate neutrality no later than 2040
and become fossil fuel free in 2050, which should set Iceland on a path to
net negative emissions.
pipeline_tag: text-classification
inference: false
co2_eq_emissions:
emissions: 268.4261122496047
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: Intel(R) Xeon(R) CPU @ 2.20GHz
ram_total_size: 12.674789428710938
hours_used: 2.03
hardware_used: 1 x Tesla V100-SXM2-16GB
base_model: BAAI/bge-base-en-v1.5
datasets:
- GIZ/policy_classification
---
# SetFit with BAAI/bge-base-en-v1.5
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) as the Sentence Transformer embedding model. A [SetFitHead](https://huggingface.co/docs/setfit/reference/main#setfit.SetFitHead) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
The purpose of this model is to predict multiple labels simultaneously from a given input. Specifically, the model predicts 3 labels -
GHGLabel, NetzeroLabel, NonGHGLabel - that are relevant to a particular task or application:
- **GHGLabel**: GHG targets refer to contributions framed as targeted outcomes in GHG terms.
- **NetzeroLabel**: identifies whether the text contains a net-zero target or not.
- **NonGHGLabel**: targets not expressed in GHG terms, such as energy efficiency or expansion of solar energy production.
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5)
- **Classification head:** a [SetFitHead](https://huggingface.co/docs/setfit/reference/main#setfit.SetFitHead) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 3 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("GIZ/SUBTARGET_multilabel_bge")
# Run inference
preds = model("This document enfolds Iceland’s first communication on its long-term strategy (LTS), to be updated when further analysis and policy documents are published on the matter. Iceland is committed to reducing its overall greenhouse gas emissions and reaching climate neutrality no later than 2040 and become fossil fuel free in 2050, which should set Iceland on a path to net negative emissions.")
```
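As a sketch of how the multi-label output might be post-processed: the label order and the 0.5 threshold below are assumptions based on the order listed in this card, not something documented by the authors.
```python
label_names = ["GHGLabel", "NetzeroLabel", "NonGHGLabel"]  # assumed order

# predict_proba returns one score per label for each input text
probs = model.predict_proba(["<paragraph from a climate policy document>"])[0]
predicted = [name for name, p in zip(label_names, probs) if p >= 0.5]
print(predicted)
```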
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 19 | 78.5467 | 173 |
- Training dataset: 728 examples
| Class | Positive Count of Class |
|:-------------|:--------|
| GHGLabel | 440 |
| NetzeroLabel | 120 |
| NonGHGLabel | 259 |
- Validation dataset: 80 examples
| Class | Positive Count of Class |
|:-------------|:--------|
| GHGLabel | 49 |
| NetzeroLabel | 11 |
| NonGHGLabel | 30 |
### Training Hyperparameters
- batch_size: (8, 2)
- num_epochs: (1, 0)
- max_steps: -1
- sampling_strategy: undersampling
- body_learning_rate: (6.86e-06, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Embedding Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:-----:|:-------------:|:---------------:|
| 0.0000 | 1 | 0.2227 | - |
| 0.1519 | 5000 | 0.015 | 0.0831 |
| 0.3038 | 10000 | 0.0146 | 0.0924 |
| 0.4557 | 15000 | 0.0197 | 0.0827 |
| 0.6076 | 20000 | 0.0031 | 0.0883 |
| 0.7595 | 25000 | 0.0439 | 0.0865 |
| 0.9114 | 30000 | 0.0029 | 0.0914 |
### Classification Metrics (validation set)
| Label | Precision | Recall | F1-score | Support |
|:-------------|:---------:|:-----:|:------:|:------:|
| GHG | 0.884 | 0.938 | 0.910 | 49.0 |
| Netzero | 0.846 | 1.000 | 0.916 | 11.0 |
| NonGHG | 0.903 | 0.933 | 0.918 | 30.0 |
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Carbon Emitted**: 0.268 kg of CO2
- **Hours Used**: 2.03 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x Tesla V100-SXM2-16GB
- **CPU Model**: Intel(R) Xeon(R) CPU @ 2.20GHz
- **RAM Size**: 12.67 GB
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.3.1
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.17.0
- Tokenizers: 0.15.2
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
DanielClough/Candle_MistralLite | DanielClough | 2024-03-06T00:54:46Z | 13 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation",
"en",
"dataset:amazon/MistralLite",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-21T00:48:47Z | ---
datasets:
- amazon/MistralLite
language:
- en
pipeline_tag: text-generation
license: apache-2.0
---
This repo includes `.gguf` files built for HuggingFace/Candle.
They will not work with `llama.cpp`.
This model should be used with the `Config` [`config_chat_ml`](https://github.com/huggingface/candle/blob/main/candle-transformers/src/models/mistral.rs).
Refer to the [original repo](https://huggingface.co/amazon/MistralLite) for more details.
|
DanielClough/Candle_Mistral-7B-Instruct-v0.1 | DanielClough | 2024-03-06T00:50:57Z | 120 | 3 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-10T08:50:30Z | ---
license: apache-2.0
---
Here we have `.gguf` and `.safetensors` of `mistralai/Mistral-7B-Instruct-v0.1` for use with `Huggingface/Candle`.
You can try the models with [Candle Chat](https://github.com/danielclough/candle_chat), or make similar models with [Candle Tensor Tools](https://github.com/danielclough/Candle_Tensor-Tools).
Refer to the main [model card](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1).
|
furrutiav/bert_qa_extractor_2022_ulra_by_kmeans_Q_nllf_ef_plus_nllf_v0_best_by_z_value_signal_it_150 | furrutiav | 2024-03-06T00:50:14Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-03-06T00:48:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
furrutiav/bert_qa_extractor_2022_ulra_by_kmeans_Q_nllf_ef_plus_nllf_best_by_z_value_signal_it_146 | furrutiav | 2024-03-06T00:50:03Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-03-06T00:48:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
state-spaces/mamba-2.8b-hf | state-spaces | 2024-03-06T00:44:55Z | 5,228 | 98 | transformers | [
"transformers",
"safetensors",
"mamba",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-05T23:53:30Z | ---
library_name: transformers
tags: []
---
# Mamba
<!-- Provide a quick summary of what the model is/does. -->
This repository contains the `transformers`-compatible `mamba-2.8b`. The checkpoints are untouched, but the full `config.json` and tokenizer are pushed to this repo.
# Usage
You need to install `transformers` from `main` until `transformers=4.39.0` is released.
```bash
pip install git+https://github.com/huggingface/transformers@main
```
We also recommend installing both `causal-conv1d` and `mamba-ssm` using:
```bash
pip install causal-conv1d>=1.2.0
pip install mamba-ssm
```
If either of these is not installed, the "eager" implementation will be used; otherwise, the more optimised CUDA kernels will be used.
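As a quick way to check which code path will be taken (not part of the original card), you can test whether the optional packages are importable:
```python
import importlib.util

# If both modules resolve, the optimised CUDA kernels are used; otherwise the
# slower "eager" PyTorch implementation is the fallback.
for module in ("causal_conv1d", "mamba_ssm"):
    found = importlib.util.find_spec(module) is not None
    print(f"{module}: {'available' if found else 'missing (eager fallback)'}")
```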
## Generation
You can use the classic `generate` API:
```python
>>> from transformers import MambaConfig, MambaForCausalLM, AutoTokenizer
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-2.8b-hf")
>>> model = MambaForCausalLM.from_pretrained("state-spaces/mamba-2.8b-hf")
>>> input_ids = tokenizer("Hey how are you doing?", return_tensors="pt")["input_ids"]
>>> out = model.generate(input_ids, max_new_tokens=10)
>>> print(tokenizer.batch_decode(out))
["Hey how are you doing?\n\nI'm doing great.\n\nI"]
```
## PEFT finetuning example
In order to finetune using the `peft` library, we recommend keeping the model in float32!
```python
from datasets import load_dataset
from trl import SFTTrainer
from peft import LoraConfig
from transformers import AutoTokenizer, AutoModelForCausalLM, TrainingArguments
tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-2.8b-hf")
model = AutoModelForCausalLM.from_pretrained("state-spaces/mamba-2.8b-hf")
dataset = load_dataset("Abirate/english_quotes", split="train")
training_args = TrainingArguments(
output_dir="./results",
num_train_epochs=3,
per_device_train_batch_size=4,
logging_dir='./logs',
logging_steps=10,
learning_rate=2e-3
)
lora_config = LoraConfig(
r=8,
target_modules=["x_proj", "embeddings", "in_proj", "out_proj"],
task_type="CAUSAL_LM",
bias="none"
)
trainer = SFTTrainer(
model=model,
tokenizer=tokenizer,
args=training_args,
peft_config=lora_config,
train_dataset=dataset,
dataset_text_field="quote",
)
trainer.train()
``` |
state-spaces/mamba-1.4b-hf | state-spaces | 2024-03-06T00:44:32Z | 3,130 | 10 | transformers | [
"transformers",
"safetensors",
"mamba",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-05T23:56:34Z | ---
library_name: transformers
tags: []
---
# Mamba
<!-- Provide a quick summary of what the model is/does. -->
This repository contains the `transformers`-compatible `mamba-1.4b`. The checkpoints are untouched, but the full `config.json` and tokenizer are pushed to this repo.
# Usage
You need to install `transformers` from `main` until `transformers=4.39.0` is released.
```bash
pip install git+https://github.com/huggingface/transformers@main
```
We also recommend installing both `causal-conv1d` and `mamba-ssm` using:
```bash
pip install causal-conv1d>=1.2.0
pip install mamba-ssm
```
If either of these is not installed, the "eager" implementation will be used; otherwise, the more optimised CUDA kernels will be used.
## Generation
You can use the classic `generate` API:
```python
>>> from transformers import MambaConfig, MambaForCausalLM, AutoTokenizer
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-1.4b-hf")
>>> model = MambaForCausalLM.from_pretrained("state-spaces/mamba-1.4b-hf")
>>> input_ids = tokenizer("Hey how are you doing?", return_tensors="pt")["input_ids"]
>>> out = model.generate(input_ids, max_new_tokens=10)
>>> print(tokenizer.batch_decode(out))
["Hey how are you doing?\n\nI'm doing great.\n\nI"]
```
## PEFT finetuning example
In order to finetune using the `peft` library, we recommend keeping the model in float32!
```python
from datasets import load_dataset
from trl import SFTTrainer
from peft import LoraConfig
from transformers import AutoTokenizer, AutoModelForCausalLM, TrainingArguments
tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-1.4b-hf")
model = AutoModelForCausalLM.from_pretrained("state-spaces/mamba-1.4b-hf")
dataset = load_dataset("Abirate/english_quotes", split="train")
training_args = TrainingArguments(
output_dir="./results",
num_train_epochs=3,
per_device_train_batch_size=4,
logging_dir='./logs',
logging_steps=10,
learning_rate=2e-3
)
lora_config = LoraConfig(
r=8,
target_modules=["x_proj", "embeddings", "in_proj", "out_proj"],
task_type="CAUSAL_LM",
bias="none"
)
trainer = SFTTrainer(
model=model,
tokenizer=tokenizer,
args=training_args,
peft_config=lora_config,
train_dataset=dataset,
dataset_text_field="quote",
)
trainer.train()
``` |
state-spaces/mamba-790m-hf | state-spaces | 2024-03-06T00:44:06Z | 1,488 | 3 | transformers | [
"transformers",
"safetensors",
"mamba",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T00:07:54Z | ---
library_name: transformers
tags: []
---
# Mamba
<!-- Provide a quick summary of what the model is/does. -->
This repository contains the `transformers`-compatible `mamba-790m`. The checkpoints are untouched, but the full `config.json` and tokenizer are pushed to this repo.
# Usage
You need to install `transformers` from `main` until `transformers=4.39.0` is released.
```bash
pip install git+https://github.com/huggingface/transformers@main
```
We also recommend installing both `causal-conv1d` and `mamba-ssm` using:
```bash
pip install causal-conv1d>=1.2.0
pip install mamba-ssm
```
If either of these is not installed, the "eager" implementation will be used; otherwise, the more optimised CUDA kernels will be used.
## Generation
You can use the classic `generate` API:
```python
>>> from transformers import MambaConfig, MambaForCausalLM, AutoTokenizer
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-790m-hf")
>>> model = MambaForCausalLM.from_pretrained("state-spaces/mamba-790m-hf")
>>> input_ids = tokenizer("Hey how are you doing?", return_tensors="pt")["input_ids"]
>>> out = model.generate(input_ids, max_new_tokens=10)
>>> print(tokenizer.batch_decode(out))
["Hey how are you doing?\n\nI'm good.\n\nHow are"]
```
## PEFT finetuning example
In order to finetune using the `peft` library, we recommend keeping the model in float32!
```python
from datasets import load_dataset
from trl import SFTTrainer
from peft import LoraConfig
from transformers import AutoTokenizer, AutoModelForCausalLM, TrainingArguments
tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-790m-hf")
model = AutoModelForCausalLM.from_pretrained("state-spaces/mamba-790m-hf")
dataset = load_dataset("Abirate/english_quotes", split="train")
training_args = TrainingArguments(
output_dir="./results",
num_train_epochs=3,
per_device_train_batch_size=4,
logging_dir='./logs',
logging_steps=10,
learning_rate=2e-3
)
lora_config = LoraConfig(
r=8,
target_modules=["x_proj", "embeddings", "in_proj", "out_proj"],
task_type="CAUSAL_LM",
bias="none"
)
trainer = SFTTrainer(
model=model,
tokenizer=tokenizer,
args=training_args,
peft_config=lora_config,
train_dataset=dataset,
dataset_text_field="quote",
)
trainer.train()
``` |
state-spaces/mamba-370m-hf | state-spaces | 2024-03-06T00:40:36Z | 2,486 | 13 | transformers | [
"transformers",
"safetensors",
"mamba",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T00:08:03Z | ---
library_name: transformers
tags: []
---
# Mamba
<!-- Provide a quick summary of what the model is/does. -->
This repository contains the `transformers`-compatible `mamba-370m`. The checkpoints are untouched, but the full `config.json` and tokenizer are pushed to this repo.
# Usage
You need to install `transformers` from `main` until `transformers=4.39.0` is released.
```bash
pip install git+https://github.com/huggingface/transformers@main
```
We also recommend installing both `causal-conv1d` and `mamba-ssm` using:
```bash
pip install causal-conv1d>=1.2.0
pip install mamba-ssm
```
If either of these is not installed, the "eager" implementation will be used; otherwise, the more optimised CUDA kernels will be used.
## Generation
You can use the classic `generate` API:
```python
>>> from transformers import MambaConfig, MambaForCausalLM, AutoTokenizer
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-370m-hf")
>>> model = MambaForCausalLM.from_pretrained("state-spaces/mamba-370m-hf")
>>> input_ids = tokenizer("Hey how are you doing?", return_tensors="pt")["input_ids"]
>>> out = model.generate(input_ids, max_new_tokens=10)
>>> print(tokenizer.batch_decode(out))
["Hey how are you doing?\n\nI'm doing great.\n\nI"]
```
## PEFT finetuning example
In order to finetune using the `peft` library, we recommend keeping the model in float32!
```python
from datasets import load_dataset
from trl import SFTTrainer
from peft import LoraConfig
from transformers import AutoTokenizer, AutoModelForCausalLM, TrainingArguments
tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-370m-hf")
model = AutoModelForCausalLM.from_pretrained("state-spaces/mamba-370m-hf")
dataset = load_dataset("Abirate/english_quotes", split="train")
training_args = TrainingArguments(
output_dir="./results",
num_train_epochs=3,
per_device_train_batch_size=4,
logging_dir='./logs',
logging_steps=10,
learning_rate=2e-3
)
lora_config = LoraConfig(
r=8,
target_modules=["x_proj", "embeddings", "in_proj", "out_proj"],
task_type="CAUSAL_LM",
bias="none"
)
trainer = SFTTrainer(
model=model,
tokenizer=tokenizer,
args=training_args,
peft_config=lora_config,
train_dataset=dataset,
dataset_text_field="quote",
)
trainer.train()
``` |
Naoto0405/sd-class-butterflies-32 | Naoto0405 | 2024-03-06T00:35:38Z | 44 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2024-03-06T00:35:25Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('Naoto0405/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
gokuls/hubert-base-ls960-finetuned-ic-slurp-wt_init | gokuls | 2024-03-06T00:25:20Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/hubert-base-ls960",
"base_model:finetune:facebook/hubert-base-ls960",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2024-03-05T14:58:15Z | ---
license: apache-2.0
base_model: facebook/hubert-base-ls960
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: hubert-base-ls960-finetuned-ic-slurp-wt_init
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert-base-ls960-finetuned-ic-slurp-wt_init
This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1377
- Accuracy: 0.4604
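As a rough usage sketch (not part of the original card), the checkpoint can be loaded with the standard `transformers` audio-classification pipeline; the file name below is a placeholder, HuBERT-base expects 16 kHz mono audio, and the label set comes from the undocumented fine-tuning data:
```python
from transformers import pipeline

# Sketch only: assumes a local 16 kHz mono audio file.
clf = pipeline("audio-classification", model="gokuls/hubert-base-ls960-finetuned-ic-slurp-wt_init")
print(clf("utterance.wav", top_k=5))
```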
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 3.9613 | 1.0 | 527 | 3.8944 | 0.0803 |
| 3.7817 | 2.0 | 1055 | 3.7275 | 0.0910 |
| 3.6357 | 3.0 | 1582 | 3.5410 | 0.1308 |
| 3.4527 | 4.0 | 2110 | 3.3426 | 0.1676 |
| 3.0715 | 5.0 | 2637 | 3.0751 | 0.2331 |
| 2.9153 | 6.0 | 3165 | 2.8168 | 0.2969 |
| 2.5333 | 7.0 | 3692 | 2.6229 | 0.3375 |
| 2.3807 | 8.0 | 4220 | 2.5673 | 0.3620 |
| 2.181 | 9.0 | 4747 | 2.4933 | 0.3835 |
| 1.9118 | 10.0 | 5275 | 2.4411 | 0.4046 |
| 1.9015 | 11.0 | 5802 | 2.4254 | 0.4126 |
| 1.5811 | 12.0 | 6330 | 2.4216 | 0.4275 |
| 1.491 | 13.0 | 6857 | 2.4833 | 0.4284 |
| 1.3697 | 14.0 | 7385 | 2.5243 | 0.4368 |
| 1.1232 | 15.0 | 7912 | 2.5944 | 0.4309 |
| 1.1071 | 16.0 | 8440 | 2.6475 | 0.4317 |
| 0.9439 | 17.0 | 8967 | 2.6379 | 0.4449 |
| 0.917 | 18.0 | 9495 | 2.7438 | 0.4468 |
| 0.7628 | 19.0 | 10022 | 2.7671 | 0.4513 |
| 0.7642 | 20.0 | 10550 | 2.8993 | 0.4418 |
| 0.6716 | 21.0 | 11077 | 2.9354 | 0.4472 |
| 0.6166 | 22.0 | 11605 | 2.9961 | 0.4510 |
| 0.4819 | 23.0 | 12132 | 3.0959 | 0.4451 |
| 0.5903 | 24.0 | 12660 | 3.0542 | 0.4557 |
| 0.515 | 25.0 | 13187 | 3.0723 | 0.4589 |
| 0.518 | 26.0 | 13715 | 3.1377 | 0.4604 |
| 0.3902 | 27.0 | 14242 | 3.2230 | 0.4524 |
| 0.4825 | 28.0 | 14770 | 3.2925 | 0.4583 |
| 0.29 | 29.0 | 15297 | 3.4027 | 0.4498 |
| 0.2789 | 30.0 | 15825 | 3.3573 | 0.4598 |
| 0.3202 | 31.0 | 16352 | 3.4381 | 0.4542 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
bovision/distilgpt2-finetuned-wikitext2 | bovision | 2024-03-06T00:24:27Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-05T23:24:40Z | ---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3608
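As a rough usage sketch (not part of the original card), the fine-tuned checkpoint can be loaded with the standard `transformers` text-generation pipeline; the prompt below is arbitrary:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="bovision/distilgpt2-finetuned-wikitext2")
print(generator("The history of the city", max_new_tokens=40)[0]["generated_text"])
```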
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 290 | 3.3948 |
| 3.5536 | 2.0 | 580 | 3.3654 |
| 3.5536 | 3.0 | 870 | 3.3608 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
|
Corianas/Neural-Mistral-7B | Corianas | 2024-03-06T00:22:42Z | 104 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:Intel/orca_dpo_pairs",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-05T15:22:01Z | ---
library_name: transformers
license: apache-2.0
datasets:
- Intel/orca_dpo_pairs
language:
- en
---
# Model Card for Neural-Mistral-7B
This is a DPO finetune of Mistral-7B-Instruct-v0.2, following the article: https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac
## Model Details
### Model Description
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** Corianas
- **Model type:** [More Information Needed]
- **License:** Apache 2.0
- **Finetuned from model:** mistralai/Mistral-7B-Instruct-v0.2
## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a beginning-of-sentence id. The next instructions should not. The assistant generation will be ended by the end-of-sentence token id.
E.g.
```
text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
```
This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## Model Architecture
This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
Intel/orca_dpo_pairs
### Training Procedure
https://medium.com/towards-data-science/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac
#### Preprocessing [optional]
```python
def chatml_format(example):
    # Format system
    if len(example['system']) > 0:
        message = {"role": "user", "content": f"{example['system']}\n{example['question']}"}
        prompt = tokenizer.apply_chat_template([message], tokenize=False)
    else:
        # Format instruction
        message = {"role": "user", "content": example['question']}
        prompt = tokenizer.apply_chat_template([message], tokenize=False, add_generation_prompt=True)

    # Format chosen answer
    chosen = example['chosen'] + tokenizer.eos_token

    # Format rejected answer
    rejected = example['rejected'] + tokenizer.eos_token

    return {
        "prompt": prompt,
        "chosen": chosen,
        "rejected": rejected,
    }
```
#### Training Hyperparameters
```python
training_args = TrainingArguments(
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    max_steps=200,
    save_strategy="no",
    logging_steps=1,
    output_dir=new_model,
    optim="paged_adamw_32bit",
    warmup_steps=100,
    bf16=True,
    report_to="wandb",
)
```
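A sketch of how these pieces plug into `trl`'s `DPOTrainer` (following the linked article and the early-2024 trl API; `model`, `ref_model`, `dataset` and `peft_config` are assumed to be set up as in that guide, and the beta and length limits below are illustrative, not taken from this card):
```python
from trl import DPOTrainer

dpo_trainer = DPOTrainer(
    model,
    ref_model,
    args=training_args,
    train_dataset=dataset,     # dataset mapped through chatml_format above
    tokenizer=tokenizer,
    peft_config=peft_config,
    beta=0.1,                  # assumed DPO temperature
    max_prompt_length=1024,
    max_length=1536,
)
dpo_trainer.train()
```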
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
furrutiav/bert_qa_extractor_2022_ulra_by_question_type_ef_plus_nllf_v0_best_by_z_value_signal_it_142 | furrutiav | 2024-03-06T00:05:55Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-03-06T00:04:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kkimdev/solar-10.7b-bnb-4bit-4 | kkimdev | 2024-03-06T00:00:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/solar-10.7b-bnb-4bit",
"base_model:finetune:unsloth/solar-10.7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-05T23:59:17Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/solar-10.7b-bnb-4bit
---
# Uploaded model
- **Developed by:** kkimdev
- **License:** apache-2.0
- **Finetuned from model :** unsloth/solar-10.7b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
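A minimal loading sketch (an assumption, not from the original card), using Unsloth's `FastLanguageModel` as in the standard Unsloth notebooks; the sequence length below is a guess, since the training configuration is not documented here:
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="kkimdev/solar-10.7b-bnb-4bit-4",
    max_seq_length=2048,   # assumed value
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to the faster inference path
```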
|
pinzhenchen/sft-lora-zh-pythia-12b | pinzhenchen | 2024-03-05T23:54:19Z | 0 | 0 | null | [
"generation",
"question answering",
"instruction tuning",
"zh",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:54:15Z |
---
language:
- zh
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-12b-deduped](https://huggingface.co/EleutherAI/pythia-12b-deduped)
* Instruction tuning language: Chinese
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
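A minimal loading sketch (our assumption of the standard `peft` adapter pattern, not taken from the authors' scripts linked above):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Base model named in this card; the adapter id is this repository.
base = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-12b-deduped")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-12b-deduped")
model = PeftModel.from_pretrained(base, "pinzhenchen/sft-lora-zh-pythia-12b")
```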
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-es-pythia-12b | pinzhenchen | 2024-03-05T23:54:04Z | 0 | 0 | null | [
"generation",
"question answering",
"instruction tuning",
"es",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:54:01Z |
---
language:
- es
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-12b-deduped](https://huggingface.co/EleutherAI/pythia-12b-deduped)
* Instruction tuning language: Spanish
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
turboderp/StarCoder2-7B-exl2 | turboderp | 2024-03-05T23:53:57Z | 1 | 0 | null | [
"region:us"
] | null | 2024-03-05T23:51:09Z | EXL2 quants of [starcoder2-7b](https://huggingface.co/bigcode/starcoder2-7b).
[3.00 bits per weight](https://huggingface.co/turboderp/StarCoder2-7B-exl2/tree/3.0bpw)
[4.00 bits per weight](https://huggingface.co/turboderp/StarCoder2-7B-exl2/tree/4.0bpw)
[5.00 bits per weight](https://huggingface.co/turboderp/StarCoder2-7B-exl2/tree/5.0bpw)
[6.00 bits per weight](https://huggingface.co/turboderp/StarCoder2-7B-exl2/tree/6.0bpw)
[measurement.json](https://huggingface.co/turboderp/StarCoder2-7B-exl2/blob/main/measurement.json) |
pinzhenchen/sft-lora-bg-pythia-12b | pinzhenchen | 2024-03-05T23:53:54Z | 0 | 0 | null | [
"generation",
"question answering",
"instruction tuning",
"bg",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:53:50Z |
---
language:
- bg
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-12b-deduped](https://huggingface.co/EleutherAI/pythia-12b-deduped)
* Instruction tuning language: Bulgarian
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-fr-pythia-6b9 | pinzhenchen | 2024-03-05T23:53:39Z | 0 | 0 | null | [
"generation",
"question answering",
"instruction tuning",
"fr",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:53:36Z |
---
language:
- fr
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-6.9b-deduped](https://huggingface.co/EleutherAI/pythia-6.9b-deduped)
* Instruction tuning language: French
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
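For illustration, a minimal sketch of attaching this LoRA checkpoint to its base model with `peft` (the full inference recipe, including the prompt template, is in the repository linked above):
```
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "EleutherAI/pythia-6.9b-deduped"  # base model listed above
adapter_id = "pinzhenchen/sft-lora-fr-pythia-6b9"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter
```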
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-es-pythia-6b9 | pinzhenchen | 2024-03-05T23:53:35Z | 0 | 0 | null | [
"generation",
"question answering",
"instruction tuning",
"es",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:53:31Z |
---
language:
- es
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-6.9b-deduped](https://huggingface.co/EleutherAI/pythia-6.9b-deduped)
* Instruction tuning language: Spanish
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
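For illustration, a minimal sketch of attaching this LoRA checkpoint to its base model with `peft` (the full inference recipe, including the prompt template, is in the repository linked above):
```
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "EleutherAI/pythia-6.9b-deduped"  # base model listed above
adapter_id = "pinzhenchen/sft-lora-es-pythia-6b9"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter
```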
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-fi-pythia-2b8 | pinzhenchen | 2024-03-05T23:52:58Z | 0 | 0 | null | [
"generation",
"question answering",
"instruction tuning",
"fi",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:52:55Z |
---
language:
- fi
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-2.8b-deduped](https://huggingface.co/EleutherAI/pythia-2.8b-deduped)
* Instruction tuning language: Finnish
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
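For illustration, a minimal sketch of attaching this LoRA checkpoint to its base model with `peft` (the full inference recipe, including the prompt template, is in the repository linked above):
```
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "EleutherAI/pythia-2.8b-deduped"  # base model listed above
adapter_id = "pinzhenchen/sft-lora-fi-pythia-2b8"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter
```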
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-cs-pythia-2b8 | pinzhenchen | 2024-03-05T23:52:41Z | 0 | 0 | null | [
"generation",
"question answering",
"instruction tuning",
"cs",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:52:39Z |
---
language:
- cs
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-2.8b-deduped](https://huggingface.co/EleutherAI/pythia-2.8b-deduped)
* Instruction tuning language: Czech
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
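For illustration, a minimal sketch of attaching this LoRA checkpoint to its base model with `peft` (the full inference recipe, including the prompt template, is in the repository linked above):
```
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "EleutherAI/pythia-2.8b-deduped"  # base model listed above
adapter_id = "pinzhenchen/sft-lora-cs-pythia-2b8"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter
```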
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-bg-pythia-2b8 | pinzhenchen | 2024-03-05T23:52:37Z | 0 | 0 | null | [
"generation",
"question answering",
"instruction tuning",
"bg",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:52:34Z |
---
language:
- bg
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-2.8b-deduped](https://huggingface.co/EleutherAI/pythia-2.8b-deduped)
* Instruction tuning language: Bulgarian
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
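For illustration, a minimal sketch of attaching this LoRA checkpoint to its base model with `peft` (the full inference recipe, including the prompt template, is in the repository linked above):
```
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "EleutherAI/pythia-2.8b-deduped"  # base model listed above
adapter_id = "pinzhenchen/sft-lora-bg-pythia-2b8"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter
```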
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-zh-pythia-1b4 | pinzhenchen | 2024-03-05T23:52:33Z | 0 | 0 | null | [
"generation",
"question answering",
"instruction tuning",
"zh",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:52:29Z |
---
language:
- zh
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-1.4b-deduped](https://huggingface.co/EleutherAI/pythia-1.4b-deduped)
* Instruction tuning language: Chinese
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
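For illustration, a minimal sketch of attaching this LoRA checkpoint to its base model with `peft` (the full inference recipe, including the prompt template, is in the repository linked above):
```
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "EleutherAI/pythia-1.4b-deduped"  # base model listed above
adapter_id = "pinzhenchen/sft-lora-zh-pythia-1b4"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter
```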
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-fr-pythia-1b4 | pinzhenchen | 2024-03-05T23:52:24Z | 0 | 0 | null | [
"generation",
"question answering",
"instruction tuning",
"fr",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:52:21Z |
---
language:
- fr
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-1.4b-deduped](https://huggingface.co/EleutherAI/pythia-1.4b-deduped)
* Instruction tuning language: French
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
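For illustration, a minimal sketch of attaching this LoRA checkpoint to its base model with `peft` (the full inference recipe, including the prompt template, is in the repository linked above):
```
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "EleutherAI/pythia-1.4b-deduped"  # base model listed above
adapter_id = "pinzhenchen/sft-lora-fr-pythia-1b4"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter
```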
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-fi-pythia-1b4 | pinzhenchen | 2024-03-05T23:52:20Z | 0 | 0 | null | [
"generation",
"question answering",
"instruction tuning",
"fi",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:52:17Z |
---
language:
- fi
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-1.4b-deduped](https://huggingface.co/EleutherAI/pythia-1.4b-deduped)
* Instruction tuning language: Finnish
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
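For illustration, a minimal sketch of attaching this LoRA checkpoint to its base model with `peft` (the full inference recipe, including the prompt template, is in the repository linked above):
```
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "EleutherAI/pythia-1.4b-deduped"  # base model listed above
adapter_id = "pinzhenchen/sft-lora-fi-pythia-1b4"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter
```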
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-es-pythia-1b4 | pinzhenchen | 2024-03-05T23:52:16Z | 0 | 0 | null | [
"generation",
"question answering",
"instruction tuning",
"es",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:52:13Z |
---
language:
- es
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-1.4b-deduped](https://huggingface.co/EleutherAI/pythia-1.4b-deduped)
* Instruction tuning language: Spanish
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
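For illustration, a minimal sketch of attaching this LoRA checkpoint to its base model with `peft` (the full inference recipe, including the prompt template, is in the repository linked above):
```
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "EleutherAI/pythia-1.4b-deduped"  # base model listed above
adapter_id = "pinzhenchen/sft-lora-es-pythia-1b4"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter
```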
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-en-pythia-1b4 | pinzhenchen | 2024-03-05T23:52:11Z | 0 | 0 | null | [
"generation",
"question answering",
"instruction tuning",
"en",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:52:08Z |
---
language:
- en
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-1.4b-deduped](https://huggingface.co/EleutherAI/pythia-1.4b-deduped)
* Instruction tuning language: English
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
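For illustration, a minimal sketch of attaching this LoRA checkpoint to its base model with `peft` (the full inference recipe, including the prompt template, is in the repository linked above):
```
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "EleutherAI/pythia-1.4b-deduped"  # base model listed above
adapter_id = "pinzhenchen/sft-lora-en-pythia-1b4"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter
```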
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-bg-pythia-1b4 | pinzhenchen | 2024-03-05T23:51:59Z | 0 | 0 | null | [
"generation",
"question answering",
"instruction tuning",
"bg",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:51:56Z |
---
language:
- bg
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-1.4b-deduped](https://huggingface.co/EleutherAI/pythia-1.4b-deduped)
* Instruction tuning language: Bulgarian
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
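For illustration, a minimal sketch of attaching this LoRA checkpoint to its base model with `peft` (the full inference recipe, including the prompt template, is in the repository linked above):
```
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "EleutherAI/pythia-1.4b-deduped"  # base model listed above
adapter_id = "pinzhenchen/sft-lora-bg-pythia-1b4"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter
```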
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-ru-pythia-1b | pinzhenchen | 2024-03-05T23:51:51Z | 0 | 0 | null | [
"generation",
"question answering",
"instruction tuning",
"ru",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:51:47Z |
---
language:
- ru
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-1b-deduped](https://huggingface.co/EleutherAI/pythia-1b-deduped)
* Instruction tuning language: Russian
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
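For illustration, a minimal sketch of attaching this LoRA checkpoint to its base model with `peft` (the full inference recipe, including the prompt template, is in the repository linked above):
```
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "EleutherAI/pythia-1b-deduped"  # base model listed above
adapter_id = "pinzhenchen/sft-lora-ru-pythia-1b"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter
```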
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-en-pythia-1b | pinzhenchen | 2024-03-05T23:51:32Z | 0 | 0 | null | [
"generation",
"question answering",
"instruction tuning",
"en",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:51:30Z |
---
language:
- en
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-1b-deduped](https://huggingface.co/EleutherAI/pythia-1b-deduped)
* Instruction tuning language: English
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
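For illustration, a minimal sketch of attaching this LoRA checkpoint to its base model with `peft` (the full inference recipe, including the prompt template, is in the repository linked above):
```
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "EleutherAI/pythia-1b-deduped"  # base model listed above
adapter_id = "pinzhenchen/sft-lora-en-pythia-1b"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter
```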
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-bg-pythia-1b | pinzhenchen | 2024-03-05T23:51:21Z | 0 | 0 | null | [
"generation",
"question answering",
"instruction tuning",
"bg",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:51:18Z |
---
language:
- bg
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-1b-deduped](https://huggingface.co/EleutherAI/pythia-1b-deduped)
* Instruction tuning language: Bulgarian
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
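For illustration, a minimal sketch of attaching this LoRA checkpoint to its base model with `peft` (the full inference recipe, including the prompt template, is in the repository linked above):
```
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "EleutherAI/pythia-1b-deduped"  # base model listed above
adapter_id = "pinzhenchen/sft-lora-bg-pythia-1b"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter
```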
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-zh-pythia-410m | pinzhenchen | 2024-03-05T23:51:17Z | 0 | 0 | null | [
"generation",
"question answering",
"instruction tuning",
"zh",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:51:13Z |
---
language:
- zh
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped)
* Instruction tuning language: Chinese
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
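For illustration, a minimal sketch of attaching this LoRA checkpoint to its base model with `peft` (the full inference recipe, including the prompt template, is in the repository linked above):
```
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "EleutherAI/pythia-410m-deduped"  # base model listed above
adapter_id = "pinzhenchen/sft-lora-zh-pythia-410m"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter
```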
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-en-pythia-410m | pinzhenchen | 2024-03-05T23:50:56Z | 0 | 0 | null | [
"generation",
"question answering",
"instruction tuning",
"en",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:50:53Z |
---
language:
- en
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped)
* Instruction tuning language: English
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
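For illustration, a minimal sketch of attaching this LoRA checkpoint to its base model with `peft` (the full inference recipe, including the prompt template, is in the repository linked above):
```
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "EleutherAI/pythia-410m-deduped"  # base model listed above
adapter_id = "pinzhenchen/sft-lora-en-pythia-410m"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter
```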
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-zh-pythia-160m | pinzhenchen | 2024-03-05T23:50:40Z | 0 | 0 | null | [
"generation",
"question answering",
"instruction tuning",
"zh",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:50:36Z |
---
language:
- zh
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-160m-deduped](https://huggingface.co/EleutherAI/pythia-160m-deduped)
* Instruction tuning language: Chinese
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
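For illustration, a minimal sketch of attaching this LoRA checkpoint to its base model with `peft` (the full inference recipe, including the prompt template, is in the repository linked above):
```
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "EleutherAI/pythia-160m-deduped"  # base model listed above
adapter_id = "pinzhenchen/sft-lora-zh-pythia-160m"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter
```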
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-ru-pythia-160m | pinzhenchen | 2024-03-05T23:50:35Z | 0 | 0 | null | [
"generation",
"question answering",
"instruction tuning",
"ru",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:50:31Z |
---
language:
- ru
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-160m-deduped](https://huggingface.co/EleutherAI/pythia-160m-deduped)
* Instruction tuning language: Russian
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
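For illustration, a minimal sketch of attaching this LoRA checkpoint to its base model with `peft` (the full inference recipe, including the prompt template, is in the repository linked above):
```
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "EleutherAI/pythia-160m-deduped"  # base model listed above
adapter_id = "pinzhenchen/sft-lora-ru-pythia-160m"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter
```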
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
gokuls/wav2vec2-base-finetuned-ic-slurp-wt_init-frz | gokuls | 2024-03-05T23:50:33Z | 8 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2024-03-05T15:42:54Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-ic-slurp-wt_init-frz
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ic-slurp-wt_init-frz
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8656
- Accuracy: 0.0665
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch is shown after the list):
- learning_rate: 0.001
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
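For reference, a rough `TrainingArguments` equivalent of the configuration above (illustrative only; the output directory name is assumed, and the Adam betas/epsilon listed are the optimizer defaults):
```
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-base-finetuned-ic-slurp-wt_init-frz",  # assumed name
    learning_rate=1e-3,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    gradient_accumulation_steps=4,  # effective train batch size 96
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
)
```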
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.7043 | 1.0 | 527 | 3.9874 | 0.0430 |
| 3.6973 | 2.0 | 1055 | 3.8656 | 0.0665 |
| 3.6275 | 3.0 | 1582 | 4.3487 | 0.0104 |
| 3.4852 | 4.0 | 2110 | 4.1588 | 0.0525 |
| 3.8932 | 5.0 | 2637 | 3.8819 | 0.0627 |
| 3.9246 | 6.0 | 3165 | 3.8627 | 0.0627 |
| 3.8914 | 7.0 | 3692 | 3.8517 | 0.0627 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
pinzhenchen/sft-lora-fr-pythia-160m | pinzhenchen | 2024-03-05T23:50:30Z | 0 | 0 | null | [
"generation",
"question answering",
"instruction tuning",
"fr",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:50:27Z |
---
language:
- fr
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-160m-deduped](https://huggingface.co/EleutherAI/pythia-160m-deduped)
* Instruction tuning language: French
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
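For illustration, a minimal sketch of attaching this LoRA checkpoint to its base model with `peft` (the full inference recipe, including the prompt template, is in the repository linked above):
```
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "EleutherAI/pythia-160m-deduped"  # base model listed above
adapter_id = "pinzhenchen/sft-lora-fr-pythia-160m"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter
```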
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-es-pythia-160m | pinzhenchen | 2024-03-05T23:50:22Z | 0 | 0 | null | [
"generation",
"question answering",
"instruction tuning",
"es",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:50:19Z |
---
language:
- es
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-160m-deduped](https://huggingface.co/EleutherAI/pythia-160m-deduped)
* Instruction tuning language: Spanish
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
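For illustration, a minimal sketch of attaching this LoRA checkpoint to its base model with `peft` (the full inference recipe, including the prompt template, is in the repository linked above):
```
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "EleutherAI/pythia-160m-deduped"  # base model listed above
adapter_id = "pinzhenchen/sft-lora-es-pythia-160m"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter
```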
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-en-pythia-160m | pinzhenchen | 2024-03-05T23:50:18Z | 0 | 0 | null | [
"generation",
"question answering",
"instruction tuning",
"en",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:50:15Z |
---
language:
- en
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-160m-deduped](https://huggingface.co/EleutherAI/pythia-160m-deduped)
* Instruction tuning language: English
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
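For illustration, a minimal sketch of attaching this LoRA checkpoint to its base model with `peft` (the full inference recipe, including the prompt template, is in the repository linked above):
```
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "EleutherAI/pythia-160m-deduped"  # base model listed above
adapter_id = "pinzhenchen/sft-lora-en-pythia-160m"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter
```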
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-de-pythia-160m | pinzhenchen | 2024-03-05T23:50:14Z | 0 | 0 | null | [
"generation",
"question answering",
"instruction tuning",
"de",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:50:11Z |
---
language:
- de
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-160m-deduped](https://huggingface.co/EleutherAI/pythia-160m-deduped)
* Instruction tuning language: German
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
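For illustration, a minimal sketch of attaching this LoRA checkpoint to its base model with `peft` (the full inference recipe, including the prompt template, is in the repository linked above):
```
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "EleutherAI/pythia-160m-deduped"  # base model listed above
adapter_id = "pinzhenchen/sft-lora-de-pythia-160m"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter
```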
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-bg-pythia-160m | pinzhenchen | 2024-03-05T23:50:06Z | 0 | 0 | null | [
"generation",
"question answering",
"instruction tuning",
"bg",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:50:03Z |
---
language:
- bg
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-160m-deduped](https://huggingface.co/EleutherAI/pythia-160m-deduped)
* Instruction tuning language: Bulgarian
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
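For illustration, a minimal sketch of attaching this LoRA checkpoint to its base model with `peft` (the full inference recipe, including the prompt template, is in the repository linked above):
```
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "EleutherAI/pythia-160m-deduped"  # base model listed above
adapter_id = "pinzhenchen/sft-lora-bg-pythia-160m"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter
```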
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|