modelId (string) | author (string) | last_modified (timestamp, UTC) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp, UTC) | card (string)
---|---|---|---|---|---|---|---|---|---|
teowu/llava_v1.5_7b_qinstruct_preview_v0.1 | teowu | 2023-11-14T06:28:29Z | 107 | 4 | transformers | [
"transformers",
"pytorch",
"llava",
"text-generation",
"en",
"dataset:teowu/Q-Instruct",
"dataset:liuhaotian/LLaVA-Instruct-150K",
"arxiv:2311.06783",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-10-29T03:10:35Z | ---
license: apache-2.0
datasets:
- teowu/Q-Instruct
- liuhaotian/LLaVA-Instruct-150K
language:
- en
---
This is a preview version of Q-Instruct LLaVA; the weights are not finalized.
```bibtex
@misc{wu2023qinstruct,
title={Q-Instruct: Improving Low-level Visual Abilities for Multi-modality Foundation Models},
author={Haoning Wu and Zicheng Zhang and Erli Zhang and Chaofeng Chen and Liang Liao and Annan Wang and Kaixin Xu and Chunyi Li and Jingwen Hou and Guangtao Zhai and Geng Xue and Wenxiu Sun and Qiong Yan and Weisi Lin},
year={2023},
eprint={2311.06783},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
``` |
teowu/llava_v1.5_13b_qinstruct_preview_v0.1 | teowu | 2023-11-14T06:27:50Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"llava",
"text-generation",
"dataset:teowu/Q-Instruct",
"arxiv:2311.06783",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-10-30T15:20:21Z | ---
license: apache-2.0
datasets:
- teowu/Q-Instruct
---
This is a preview version of Q-Instruct LLaVA; the weights are not finalized.
```bibtex
@misc{wu2023qinstruct,
title={Q-Instruct: Improving Low-level Visual Abilities for Multi-modality Foundation Models},
author={Haoning Wu and Zicheng Zhang and Erli Zhang and Chaofeng Chen and Liang Liao and Annan Wang and Kaixin Xu and Chunyi Li and Jingwen Hou and Guangtao Zhai and Geng Xue and Wenxiu Sun and Qiong Yan and Weisi Lin},
year={2023},
eprint={2311.06783},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
``` |
norip76/qlora-koalpaca | norip76 | 2023-11-14T06:25:51Z | 2 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:EleutherAI/polyglot-ko-1.3b",
"base_model:adapter:EleutherAI/polyglot-ko-1.3b",
"region:us"
]
| null | 2023-11-14T06:25:48Z | ---
library_name: peft
base_model: EleutherAI/polyglot-ko-1.3b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
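For reference, a minimal sketch of the same quantization setup expressed as a `BitsAndBytesConfig` (not from the card; the base model ID is taken from the card header):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 with double quantization and bfloat16 compute, mirroring the config above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/polyglot-ko-1.3b",
    quantization_config=bnb_config,
)
```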
### Framework versions
- PEFT 0.6.2
|
LinaAlhuri/Arabic-clip-vit-base-patch16 | LinaAlhuri | 2023-11-14T06:17:28Z | 5 | 0 | transformers | [
"transformers",
"jax",
"tensorboard",
"hybrid-clip",
"text-to-image",
"ar",
"endpoints_compatible",
"region:us"
]
| text-to-image | 2023-04-20T03:18:03Z | ---
language:
- ar
pipeline_tag: text-to-image
---
## Model Details
Arabic CLIP is an adaptation of Contrastive Language-Image Pre-training (CLIP) for the Arabic language. CLIP is an OpenAI-developed model that learns visual concepts from images and relates them to textual descriptions. This work aims to improve the model's understanding and interpretation of visual information in the context of the Arabic language.
## Model Use
```python
from transformers import AutoTokenizer, FlaxVisionTextDualEncoderModel
model = FlaxVisionTextDualEncoderModel.from_pretrained("LinaAlhuri/Arabic-clip-vit-base-patch16", logit_scale_init_value=1,from_pt=True)
model.save_pretrained("arabic_clip")
tokenizer = AutoTokenizer.from_pretrained("asafaya/bert-base-arabic", cache_dir=None, use_fast=True)
```
## Data
The aim was to create a comprehensive Arabic image-text dataset by combining various data sources due to the scarcity of Arabic resources. Challenges included limited Arabic data and the quality of translated datasets. The approach involved merging genuine datasets for rich information and using translated datasets to cover diverse domains, scenarios, and objects, striking a balance between their respective pros and cons.
| Dataset name | Images |
| --- | --- |
|Arabic Conceptual Captions |1,427,210|
|Arabic COCO 2014 |414,113|
|Arabic WIT |109,366|
|Arabic Flickr8K |24,272|
|Proposed (WAP) dataset |151,252|
|Total |2,126,213|
## Performance and Limitations
We have tested the efficacy of Arabic CLIP across different benchmarks tailored for tasks like zero-shot learning, image retrieval, localization, and image search.
- Conceptual Captions
- COCO
- ImageNet
- Unsplash
## Limitations
To summarize the limitations:
- Arabic CLIP struggles to count beyond 3.
- Genuine (non-translated) Arabic samples are limited.
- Various kinds of noise and bias may be introduced into Arabic CLIP, since no studies have yet addressed these issues in the published Arabic datasets or Arabic language models.
### Bias
For gender bias, it is important to note that Arabic uses a two-gender system in which all nouns are classified as masculine or feminine.
However, this is not the case for English. Translating the text from English to Arabic may result in information loss or even make it prone to gender bias.
|
LinaAlhuri/clip-bert-lit-two-stages | LinaAlhuri | 2023-11-14T06:16:05Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vision-text-dual-encoder",
"feature-extraction",
"text-to-image",
"ar",
"endpoints_compatible",
"region:us"
]
| text-to-image | 2023-04-28T09:27:45Z | ---
language:
- ar
pipeline_tag: text-to-image
---
## Model Details
Arabic CLIP is an adaptation of Contrastive Language-Image Pre-training (CLIP) for the Arabic language. CLIP is an OpenAI-developed model that learns visual concepts from images and relates them to textual descriptions. This work aims to improve the model's understanding and interpretation of visual information in the context of the Arabic language.
This version of Arabic CLIP uses a two-stage training technique: the vision encoder is frozen during the first stage, then unfrozen in the second stage to allow fine-tuning.
## Model Use
```python
from transformers import VisionTextDualEncoderModel, AutoTokenizer
model = VisionTextDualEncoderModel.from_pretrained("LinaAlhuri/clip-bert-lit-two-stages")
model.save_pretrained("arabic_clip")
tokenizer = AutoTokenizer.from_pretrained("asafaya/bert-base-arabic", cache_dir=None, use_fast=True)
```
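Continuing from the snippet above, a minimal inference sketch for scoring Arabic captions against an image. The CLIP image processor checkpoint, the image path, and the captions are illustrative assumptions, not part of the original card:
```python
import torch
from PIL import Image
from transformers import CLIPImageProcessor

# Assumption: the vision tower is CLIP-compatible, so a standard CLIP image processor is used.
image_processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32")
image = Image.open("example.jpg")  # placeholder image path
pixel_values = image_processor(images=image, return_tensors="pt").pixel_values

texts = ["قطة على الأريكة", "سيارة حمراء"]  # "a cat on the couch", "a red car"
text_inputs = tokenizer(texts, padding=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(
        input_ids=text_inputs.input_ids,
        attention_mask=text_inputs.attention_mask,
        pixel_values=pixel_values,
    )
probs = outputs.logits_per_image.softmax(dim=1)  # image-to-caption similarity
```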
## Data
The aim was to create a comprehensive Arabic image-text dataset by combining various data sources due to the scarcity of Arabic resources. Challenges included limited Arabic data and the quality of translated datasets. The approach involved merging genuine datasets for rich information and using translated datasets to cover diverse domains, scenarios, and objects, striking a balance between their respective pros and cons.
| Dataset name | Images |
| --- | --- |
|Arabic Conceptual Captions |1,427,210|
|Arabic COCO 2014 |414,113|
|Arabic WIT |109,366|
|Arabic Flickr8K |24,272|
|Proposed (WAP) dataset |151,252|
|Total |2,126,213|
## Performance and Limitations
We have tested the efficacy of Arabic CLIP across different benchmarks tailored for tasks like zero-shot learning, image retrieval, localization, and image search.
- Conceptual Captions
- COCO
- ImageNet
- Unsplash
### Zero-shot Learning
| Multilingual CLIP| Top 1 | Top 5 | Top 10 | Top 100 |
|-----------------------|---------|---------|---------|---------|
| **Short translation** | 10.10 | 21.99 | 26.70 | 47.57 |
| **Long translation** | 9.518 | 20.942 | 25.54 | 45.59 |
| Two-stages Arabic CLIP| Top 1 | Top 5 | Top 10 | Top 100 |
|----------------------|---------|---------|---------|---------|
| **Short translation** | 17.28 | 36.97 | 45.43 | 73.85 |
| **Long translation** | 15.52 | 34.74 | 43.49 | 72.28 |
### Image Retrieval
#### Conceptual Captions Evaluation
| Metric | MCLIP | Two-stages Arabic CLIP |
|---------|-------|-------------------------|
| **MRR@1** | 0.064 | 0.151 |
| **MRR@5** | 0.093 | 0.215 |
| **MRR@10** | 0.100 | 0.227 |
#### COCO Evaluation
| Metric | MCLIP | Two-stages Arabic CLIP |
|---------|-------|-------------------------|
| **MRR@1** | 0.043 | 0.062 |
| **MRR@5** | 0.068 | 0.097 |
| **MRR@10** | 0.074 | 0.106 |
## Limitations
To summarize the limitations:
- Arabic CLIP struggles to count beyond 3.
- Genuine (non-translated) Arabic samples are limited.
- Various kinds of noise and bias may be introduced into Arabic CLIP, since no studies have yet addressed these issues in the published Arabic datasets or Arabic language models.
### Bias
For gender bias, it is important to note that Arabic uses a two-gender system in which all nouns are classified as masculine or feminine.
However, this is not the case for English. Translating the text from English to Arabic may result in information loss or even make it prone to gender bias.
|
SuperDan/Taxi-v3 | SuperDan | 2023-11-14T06:15:30Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-14T06:15:29Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.63
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Hugging Face Deep RL Course notebooks.
model = load_from_hub(repo_id="SuperDan/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
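Continuing from the snippet above, a hedged evaluation sketch; the `"qtable"` key and the Gymnasium-style step API follow the Deep RL Course conventions and are assumptions here:
```python
import numpy as np

# Greedy rollout with the downloaded Q-table.
state, info = env.reset()
done = False
total_reward = 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # "qtable" key assumed from the course format
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(total_reward)
```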
|
communityai/apt-chat-yi-6b-sft-full | communityai | 2023-11-14T05:55:20Z | 22 | 1 | transformers | [
"transformers",
"safetensors",
"Yi",
"text-generation",
"generated_from_trainer",
"conversational",
"custom_code",
"base_model:01-ai/Yi-6B",
"base_model:finetune:01-ai/Yi-6B",
"license:other",
"autotrain_compatible",
"region:us"
]
| text-generation | 2023-11-11T10:04:55Z | ---
license: other
base_model: 01-ai/Yi-6B
tags:
- generated_from_trainer
model-index:
- name: apt-chat-yi-6B-sft-full
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# apt-chat-yi-6B-sft-full
This model is a fine-tuned version of [01-ai/Yi-6B](https://huggingface.co/01-ai/Yi-6B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0677
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0548 | 0.15 | 1368 | 1.0247 |
| 0.9254 | 1.15 | 2736 | 1.0677 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
sjung/my_awesome_billsum_model | sjung | 2023-11-14T05:40:07Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-11-14T05:39:39Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1408
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5213
- Rouge1: 0.1408
- Rouge2: 0.0553
- Rougel: 0.1179
- Rougelsum: 0.118
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.8080 | 0.1287 | 0.0386 | 0.1085 | 0.1088 | 19.0 |
| No log | 2.0 | 124 | 2.6015 | 0.1375 | 0.05 | 0.1156 | 0.1157 | 19.0 |
| No log | 3.0 | 186 | 2.5384 | 0.1417 | 0.0569 | 0.119 | 0.1192 | 19.0 |
| No log | 4.0 | 248 | 2.5213 | 0.1408 | 0.0553 | 0.1179 | 0.118 | 19.0 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
sbaner24/vit-base-patch16-224-Trial006_007_008-YEL_STEM4 | sbaner24 | 2023-11-14T05:35:59Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-14T05:12:01Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-Trial006_007_008-YEL_STEM4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-Trial006_007_008-YEL_STEM4
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0236
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 60
- eval_batch_size: 60
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 240
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7015 | 1.0 | 4 | 0.6622 | 0.5769 |
| 0.7592 | 2.0 | 8 | 0.6507 | 0.5769 |
| 0.5858 | 3.0 | 12 | 0.6202 | 0.5673 |
| 0.5612 | 4.0 | 16 | 0.4974 | 0.7019 |
| 0.4916 | 5.0 | 20 | 0.4718 | 0.7692 |
| 0.4925 | 6.0 | 24 | 0.3628 | 0.8173 |
| 0.4221 | 7.0 | 28 | 0.3077 | 0.875 |
| 0.4043 | 8.0 | 32 | 0.2262 | 0.9135 |
| 0.3757 | 9.0 | 36 | 0.1462 | 0.9712 |
| 0.2699 | 10.0 | 40 | 0.1450 | 0.9327 |
| 0.2677 | 11.0 | 44 | 0.0960 | 0.9615 |
| 0.2279 | 12.0 | 48 | 0.0745 | 0.9808 |
| 0.2113 | 13.0 | 52 | 0.0645 | 0.9808 |
| 0.1821 | 14.0 | 56 | 0.0649 | 0.9808 |
| 0.2025 | 15.0 | 60 | 0.0434 | 0.9904 |
| 0.1684 | 16.0 | 64 | 0.0433 | 0.9904 |
| 0.1521 | 17.0 | 68 | 0.0564 | 0.9808 |
| 0.258 | 18.0 | 72 | 0.0236 | 1.0 |
| 0.2594 | 19.0 | 76 | 0.0212 | 1.0 |
| 0.2045 | 20.0 | 80 | 0.0160 | 1.0 |
| 0.1947 | 21.0 | 84 | 0.0411 | 0.9904 |
| 0.1871 | 22.0 | 88 | 0.0245 | 0.9904 |
| 0.1702 | 23.0 | 92 | 0.0161 | 1.0 |
| 0.1591 | 24.0 | 96 | 0.0124 | 1.0 |
| 0.1387 | 25.0 | 100 | 0.0137 | 0.9904 |
| 0.146 | 26.0 | 104 | 0.0094 | 1.0 |
| 0.1005 | 27.0 | 108 | 0.0067 | 1.0 |
| 0.1612 | 28.0 | 112 | 0.0051 | 1.0 |
| 0.1821 | 29.0 | 116 | 0.0052 | 1.0 |
| 0.1489 | 30.0 | 120 | 0.0059 | 1.0 |
| 0.1793 | 31.0 | 124 | 0.0057 | 1.0 |
| 0.1624 | 32.0 | 128 | 0.0079 | 1.0 |
| 0.1516 | 33.0 | 132 | 0.0071 | 1.0 |
| 0.1195 | 34.0 | 136 | 0.0054 | 1.0 |
| 0.1854 | 35.0 | 140 | 0.0065 | 1.0 |
| 0.1329 | 36.0 | 144 | 0.0036 | 1.0 |
| 0.0951 | 37.0 | 148 | 0.0025 | 1.0 |
| 0.1886 | 38.0 | 152 | 0.0060 | 1.0 |
| 0.1151 | 39.0 | 156 | 0.0025 | 1.0 |
| 0.1353 | 40.0 | 160 | 0.0045 | 1.0 |
| 0.1455 | 41.0 | 164 | 0.0035 | 1.0 |
| 0.1824 | 42.0 | 168 | 0.0036 | 1.0 |
| 0.1328 | 43.0 | 172 | 0.0051 | 1.0 |
| 0.1349 | 44.0 | 176 | 0.0043 | 1.0 |
| 0.1348 | 45.0 | 180 | 0.0039 | 1.0 |
| 0.1201 | 46.0 | 184 | 0.0042 | 1.0 |
| 0.1087 | 47.0 | 188 | 0.0049 | 1.0 |
| 0.0875 | 48.0 | 192 | 0.0063 | 1.0 |
| 0.1814 | 49.0 | 196 | 0.0069 | 1.0 |
| 0.1342 | 50.0 | 200 | 0.0064 | 1.0 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 1.12.1
- Datasets 2.12.0
- Tokenizers 0.13.1
|
SongChris/llama2-qlora-opposite-stance | SongChris | 2023-11-14T05:06:26Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:TinyPixel/Llama-2-7B-bf16-sharded",
"base_model:adapter:TinyPixel/Llama-2-7B-bf16-sharded",
"region:us"
]
| null | 2023-11-14T03:54:36Z | ---
library_name: peft
base_model: TinyPixel/Llama-2-7B-bf16-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
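A minimal sketch of loading this adapter onto its base model (standard PEFT usage, not taken from the card):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model, then attach the LoRA adapter from this repo.
base = AutoModelForCausalLM.from_pretrained("TinyPixel/Llama-2-7B-bf16-sharded")
model = PeftModel.from_pretrained(base, "SongChris/llama2-qlora-opposite-stance")
```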
### Framework versions
- PEFT 0.6.2.dev0
|
NathBat/Rumarino | NathBat | 2023-11-14T04:52:20Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
]
| text-to-image | 2023-10-31T14:46:06Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of an abydo
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
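A hedged inference sketch, assuming AutoTrain saved DreamBooth LoRA weights (as it typically does for SDXL); the instance prompt comes from the card metadata:
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("NathBat/Rumarino")  # assumes LoRA weights are in this repo

image = pipe("photo of an abydo", num_inference_steps=25).images[0]
```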
|
bradrichardson/ppo-lunarlander-v2 | bradrichardson | 2023-11-14T04:35:28Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-14T04:35:05Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 265.14 +/- 19.26
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
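Until the stub above is filled in, a minimal loading sketch; the checkpoint filename is an assumption, so check the repo's file list:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint and load the PPO agent.
checkpoint = load_from_hub(repo_id="bradrichardson/ppo-lunarlander-v2", filename="ppo-lunarlander-v2.zip")
model = PPO.load(checkpoint)
```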
|
PiyushLavaniya/Llama2_Banker | PiyushLavaniya | 2023-11-14T04:23:32Z | 16 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2023-11-10T06:36:34Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
I've fine-tuned a language model to be my virtual banker, tailored to understand financial nuances and navigate the intricacies of banking tasks.
## Model Details
I've fine-tuned Llama 2 to function as my virtual banking assistant. This personalized AI understands the intricacies of financial tasks, allowing me to seamlessly instruct it for a range of banking activities. From transaction analysis to insights on investment opportunities, Llama 2 has become my digital finance companion, making banking more efficient and tailored to my specific needs.
- **Finetuned from model:** meta-llama/Llama-2-7b-chat-hf
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/PiyushLavaniya/Finetuning-Llama2/blob/main/Llama2_Banker_Finetuned_Llama.ipynb
## Uses
The model's intended use is essential for ethical deployment. It's designed to assist users in tasks related to natural language understanding, generation, and text-based applications. Foreseeable users include developers, researchers, and businesses seeking advanced language processing capabilities. The model's impact extends to those directly interacting with its outputs, as well as downstream users affected by applications incorporating its features. Transparency in communicating the model's strengths, limitations, and potential biases is crucial to ensure responsible and informed usage by all stakeholders.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="PiyushLavaniya/Llama2_Banker")
```
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("PiyushLavaniya/Llama2_Banker")
model = AutoModelForCausalLM.from_pretrained("PiyushLavaniya/Llama2_Banker")
```
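For instance, a short generation sketch via the pipeline; the prompt is illustrative:
```python
from transformers import pipeline

pipe = pipeline("text-generation", model="PiyushLavaniya/Llama2_Banker")
result = pipe("What are the benefits of a high-yield savings account?", max_new_tokens=128)
print(result[0]["generated_text"])
```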
## Training Details
### Training Data
The model is fine-tuned on ssbuild/alpaca_finance_en.
Fine-tuning on the ssbuild/alpaca_finance_en dataset represents a strategic customization for financial applications, possibly related to Alpaca Finance.
#### Training Hyperparameters
- **Training regime:** fp16 mixed precision with 8-bit Adam, using the following `TrainingArguments`:
```python
adam_bits = 8

training_arguments = TrainingArguments(
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,
    run_name=f"deb-v2-xl-{adam_bits}bitAdam",
    logging_steps=20,
    learning_rate=2e-4,
    fp16=True,
    max_grad_norm=0.3,
    max_steps=1200,
    warmup_ratio=0.03,
    group_by_length=True,
    lr_scheduler_type="constant",
)
```
|
jolual2747/falcon7b-ecommerce | jolual2747 | 2023-11-14T04:16:03Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"falcon7b",
"text-generation",
"e-commerce",
"generated_from_trainer",
"base_model:vilsonrodrigues/falcon-7b-instruct-sharded",
"base_model:finetune:vilsonrodrigues/falcon-7b-instruct-sharded",
"license:apache-2.0",
"region:us"
]
| text-generation | 2023-11-14T03:57:10Z | ---
license: apache-2.0
base_model: vilsonrodrigues/falcon-7b-instruct-sharded
tags:
- falcon7b
- text-generation
- e-commerce
- generated_from_trainer
model-index:
- name: falcon7b-ecommerce
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon7b-ecommerce
This model is a fine-tuned version of [vilsonrodrigues/falcon-7b-instruct-sharded](https://huggingface.co/vilsonrodrigues/falcon-7b-instruct-sharded) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
temporary0-0name/run_4 | temporary0-0name | 2023-11-14T04:15:22Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-generation",
"generated_from_trainer",
"dataset:wikitext",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-11-13T10:45:11Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
datasets:
- wikitext
model-index:
- name: run_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# run_4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the wikitext dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2449
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 9.1704 | 0.27 | 50 | 8.0057 |
| 7.2118 | 0.55 | 100 | 6.6834 |
| 6.5244 | 0.82 | 150 | 6.3491 |
| 6.2201 | 1.1 | 200 | 6.0229 |
| 5.7189 | 1.37 | 250 | 5.1311 |
| 4.1268 | 1.65 | 300 | 2.9582 |
| 2.4963 | 1.92 | 350 | 1.7429 |
| 1.5611 | 2.2 | 400 | 1.0743 |
| 1.0537 | 2.47 | 450 | 0.7155 |
| 0.7665 | 2.75 | 500 | 0.5189 |
| 0.5947 | 3.02 | 550 | 0.4061 |
| 0.4782 | 3.29 | 600 | 0.3396 |
| 0.4161 | 3.57 | 650 | 0.2976 |
| 0.3785 | 3.84 | 700 | 0.2718 |
| 0.3491 | 4.12 | 750 | 0.2567 |
| 0.3319 | 4.39 | 800 | 0.2488 |
| 0.3286 | 4.67 | 850 | 0.2455 |
| 0.326 | 4.94 | 900 | 0.2449 |
### Framework versions
- Transformers 4.33.1
- Pytorch 1.12.1
- Datasets 2.14.6
- Tokenizers 0.13.3
|
qiacheng/stable-diffusion-xl-base-1.0-lcm | qiacheng | 2023-11-14T03:59:54Z | 1 | 0 | diffusers | [
"diffusers",
"safetensors",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2023-11-14T03:33:49Z | ---
license: apache-2.0
---
SDXL-base-1.0 LDM: https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0
SDXL-base-1.0 LCM-Lora: https://huggingface.co/latent-consistency/lcm-lora-sdxl
This is the SDXL base model fused with the LCM LoRA, saved using diffusers' `save_pretrained()`.
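A sketch of how such a fused checkpoint can be produced (an assumed workflow based on the links above, not necessarily the exact commands used):
```python
from diffusers import LCMScheduler, StableDiffusionXLPipeline

# Load the base model, swap in the LCM scheduler, fuse the LCM LoRA, and save.
pipe = StableDiffusionXLPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")
pipe.fuse_lora()
pipe.save_pretrained("stable-diffusion-xl-base-1.0-lcm")
```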
Sample usage:
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained("qiacheng/stable-diffusion-xl-base-1.0-lcm")

prompt = "a cat"
height = 1024
width = 1024
steps = 6            # LCM needs only a few inference steps
guidance_scale = 1   # LCM works best with little or no classifier-free guidance

output = pipe(prompt=prompt, height=height, width=width, num_inference_steps=steps, guidance_scale=guidance_scale, output_type="pil")
image = output.images[0]  # first generated PIL image
```
|
d33plearning/blip2-20epochs-2mrandom1k | d33plearning | 2023-11-14T03:57:54Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:ybelkada/blip2-opt-2.7b-fp16-sharded",
"base_model:adapter:ybelkada/blip2-opt-2.7b-fp16-sharded",
"region:us"
]
| null | 2023-11-14T03:57:47Z | ---
library_name: peft
base_model: ybelkada/blip2-opt-2.7b-fp16-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
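A minimal sketch of loading the 8-bit base model and attaching this adapter (standard PEFT usage, not taken from the card):
```python
from peft import PeftModel
from transformers import Blip2ForConditionalGeneration

# Load the BLIP-2 base in 8-bit, mirroring the config above, then attach the adapter.
base = Blip2ForConditionalGeneration.from_pretrained(
    "ybelkada/blip2-opt-2.7b-fp16-sharded", load_in_8bit=True, device_map="auto"
)
model = PeftModel.from_pretrained(base, "d33plearning/blip2-20epochs-2mrandom1k")
```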
### Framework versions
- PEFT 0.6.2.dev0
|
mb23/FullySpikingAutoEncoder | mb23 | 2023-11-14T03:45:48Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"fsae",
"feature-extraction",
"custom_code",
"en",
"license:mit",
"region:us"
]
| feature-extraction | 2023-11-14T03:15:18Z | ---
license: mit
language:
- en
---
# Fully Spiking Auto Encoder Model
* Work in progress
* Can someone edit this?
|
sbaner24/vit-base-patch16-224-Trial006-YEL_STEM4 | sbaner24 | 2023-11-14T03:42:11Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-06-21T01:30:35Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-Trial006-YEL_STEM4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-Trial006-YEL_STEM4
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0984
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 60
- eval_batch_size: 60
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 240
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7587 | 0.57 | 1 | 0.6296 | 0.7111 |
| 0.6884 | 1.71 | 3 | 0.7730 | 0.4 |
| 0.6112 | 2.86 | 5 | 0.5194 | 0.8222 |
| 0.5566 | 4.0 | 7 | 0.6194 | 0.6444 |
| 0.5216 | 4.57 | 8 | 0.5428 | 0.6889 |
| 0.4488 | 5.71 | 10 | 0.3884 | 0.8222 |
| 0.4438 | 6.86 | 12 | 0.3301 | 0.8444 |
| 0.3897 | 8.0 | 14 | 0.2362 | 0.9111 |
| 0.3789 | 8.57 | 15 | 0.1942 | 0.9333 |
| 0.3484 | 9.71 | 17 | 0.3995 | 0.8222 |
| 0.2727 | 10.86 | 19 | 0.1636 | 0.9556 |
| 0.209 | 12.0 | 21 | 0.1489 | 0.9556 |
| 0.2253 | 12.57 | 22 | 0.1712 | 0.9111 |
| 0.2407 | 13.71 | 24 | 0.2239 | 0.9111 |
| 0.1615 | 14.86 | 26 | 0.0984 | 1.0 |
| 0.1735 | 16.0 | 28 | 0.1231 | 0.9111 |
| 0.179 | 16.57 | 29 | 0.1203 | 0.9111 |
| 0.1464 | 17.71 | 31 | 0.0422 | 1.0 |
| 0.1444 | 18.86 | 33 | 0.0409 | 1.0 |
| 0.1758 | 20.0 | 35 | 0.0394 | 1.0 |
| 0.199 | 20.57 | 36 | 0.0246 | 1.0 |
| 0.1525 | 21.71 | 38 | 0.0179 | 1.0 |
| 0.1536 | 22.86 | 40 | 0.0441 | 1.0 |
| 0.115 | 24.0 | 42 | 0.0836 | 0.9333 |
| 0.106 | 24.57 | 43 | 0.0654 | 0.9778 |
| 0.1267 | 25.71 | 45 | 0.0285 | 1.0 |
| 0.1264 | 26.86 | 47 | 0.0199 | 1.0 |
| 0.1554 | 28.0 | 49 | 0.0192 | 1.0 |
| 0.1456 | 28.57 | 50 | 0.0195 | 1.0 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 1.12.1
- Datasets 2.12.0
- Tokenizers 0.13.1
|
deepseek-ai/deepseek-coder-1.3b-base | deepseek-ai | 2023-11-14T03:32:27Z | 61,326 | 79 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-10-28T07:42:03Z | ---
license: other
license_name: deepseek-license
license_link: LICENSE
---
<p align="center">
<img width="1000px" alt="DeepSeek Coder" src="https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/pictures/logo.png?raw=true">
</p>
<p align="center"><a href="https://www.deepseek.com/">[🏠Homepage]</a> | <a href="https://coder.deepseek.com/">[🤖 Chat with DeepSeek Coder]</a> | <a href="https://discord.gg/Tc7c45Zzu5">[Discord]</a> | <a href="https://github.com/guoday/assert/blob/main/QR.png?raw=true">[Wechat(微信)]</a> </p>
<hr>
### 1. Introduction of Deepseek Coder
Deepseek Coder is composed of a series of code language models, each trained from scratch on 2T tokens, with a composition of 87% code and 13% natural language in both English and Chinese. We provide various sizes of the code model, ranging from 1B to 33B versions. Each model is pre-trained on a project-level code corpus with a window size of 16K and an extra fill-in-the-blank task, to support project-level code completion and infilling. For coding capabilities, Deepseek Coder achieves state-of-the-art performance among open-source code models on multiple programming languages and various benchmarks.
- **Massive Training Data**: Trained from scratch on 2T tokens, including 87% code and 13% linguistic data in both English and Chinese languages.
- **Highly Flexible & Scalable**: Offered in model sizes of 1.3B, 5.7B, 6.7B, and 33B, enabling users to choose the setup most suitable for their requirements.
- **Superior Model Performance**: State-of-the-art performance among publicly available code models on HumanEval, MultiPL-E, MBPP, DS-1000, and APPS benchmarks.
- **Advanced Code Completion Capabilities**: A window size of 16K and a fill-in-the-blank task, supporting project-level code completion and infilling tasks.
### 2. Model Summary
deepseek-coder-1.3b-base is a 1.3B parameter model with Multi-Head Attention trained on 1 trillion tokens.
- **Home Page:** [DeepSeek](https://deepseek.com/)
- **Repository:** [deepseek-ai/deepseek-coder](https://github.com/deepseek-ai/deepseek-coder)
- **Chat With DeepSeek Coder:** [DeepSeek-Coder](https://coder.deepseek.com/)
### 3. How to Use
Here are some examples of how to use our model.
#### 1) Code Completion
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-1.3b-base", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-1.3b-base", trust_remote_code=True).cuda()
input_text = "#write a quick sort algorithm"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
#### 2) Code Insertion
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-1.3b-base", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-1.3b-base", trust_remote_code=True).cuda()
input_text = """<|fim▁begin|>def quick_sort(arr):
if len(arr) <= 1:
return arr
pivot = arr[0]
left = []
right = []
<|fim▁hole|>
if arr[i] < pivot:
left.append(arr[i])
else:
right.append(arr[i])
return quick_sort(left) + [pivot] + quick_sort(right)<|fim▁end|>"""
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True)[len(input_text):])
```
#### 3) Repository Level Code Completion
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-1.3b-base", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-1.3b-base", trust_remote_code=True).cuda()
input_text = """#utils.py
import torch
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score
def load_data():
iris = datasets.load_iris()
X = iris.data
y = iris.target
# Standardize the data
scaler = StandardScaler()
X = scaler.fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
# Convert numpy data to PyTorch tensors
X_train = torch.tensor(X_train, dtype=torch.float32)
X_test = torch.tensor(X_test, dtype=torch.float32)
y_train = torch.tensor(y_train, dtype=torch.int64)
y_test = torch.tensor(y_test, dtype=torch.int64)
return X_train, X_test, y_train, y_test
def evaluate_predictions(y_test, y_pred):
return accuracy_score(y_test, y_pred)
#model.py
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset
class IrisClassifier(nn.Module):
def __init__(self):
super(IrisClassifier, self).__init__()
self.fc = nn.Sequential(
nn.Linear(4, 16),
nn.ReLU(),
nn.Linear(16, 3)
)
def forward(self, x):
return self.fc(x)
def train_model(self, X_train, y_train, epochs, lr, batch_size):
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(self.parameters(), lr=lr)
# Create DataLoader for batches
dataset = TensorDataset(X_train, y_train)
dataloader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
for epoch in range(epochs):
for batch_X, batch_y in dataloader:
optimizer.zero_grad()
outputs = self(batch_X)
loss = criterion(outputs, batch_y)
loss.backward()
optimizer.step()
def predict(self, X_test):
with torch.no_grad():
outputs = self(X_test)
_, predicted = outputs.max(1)
return predicted.numpy()
#main.py
from utils import load_data, evaluate_predictions
from model import IrisClassifier as Classifier
def main():
# Model training and evaluation
"""
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=140)
print(tokenizer.decode(outputs[0]))
```
### 4. License
This code repository is licensed under the MIT License. The use of DeepSeek Coder models is subject to the Model License. DeepSeek Coder supports commercial use.
See the [LICENSE-MODEL](https://github.com/deepseek-ai/deepseek-coder/blob/main/LICENSE-MODEL) for more details.
### 5. Contact
If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).
|
tommylam/POCA-soccerTwos | tommylam | 2023-11-14T03:28:38Z | 2 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-11-14T03:22:13Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: tommylam/POCA-soccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
XueHaoTay/dog | XueHaoTay | 2023-11-14T03:16:50Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:vilsonrodrigues/falcon-7b-instruct-sharded",
"base_model:adapter:vilsonrodrigues/falcon-7b-instruct-sharded",
"region:us"
]
| null | 2023-11-14T02:36:54Z | ---
library_name: peft
base_model: vilsonrodrigues/falcon-7b-instruct-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
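
For reference, the settings listed above correspond to roughly the following `BitsAndBytesConfig` (a minimal sketch using the standard `transformers` API; this is a reconstruction, not code taken from this repo):

```python
import torch
from transformers import BitsAndBytesConfig

# Assumed reconstruction of the quantization settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
```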
### Framework versions
- PEFT 0.6.1
|
liziyang625/bert-fine-tuned-cola | liziyang625 | 2023-11-14T03:12:34Z | 101 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-11-13T15:34:45Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-fine-tuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5691684038863919
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-fine-tuned-cola
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7804
- Matthews Correlation: 0.5692
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
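
These settings map onto roughly the following `TrainingArguments` (a sketch for orientation; the output directory name is an assumption, not the script actually used):

```python
from transformers import TrainingArguments

# Approximate reconstruction of the hyperparameters listed above
training_args = TrainingArguments(
    output_dir="bert-fine-tuned-cola",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```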
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4613 | 1.0 | 1069 | 0.4303 | 0.5507 |
| 0.3238 | 2.0 | 2138 | 0.6988 | 0.5778 |
| 0.1973 | 3.0 | 3207 | 0.7804 | 0.5692 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.13.2
|
jolual2747/experiments | jolual2747 | 2023-11-14T03:06:45Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"falcon7b",
"text-generation",
"e-commerce",
"generated_from_trainer",
"base_model:vilsonrodrigues/falcon-7b-instruct-sharded",
"base_model:finetune:vilsonrodrigues/falcon-7b-instruct-sharded",
"license:apache-2.0",
"region:us"
]
| text-generation | 2023-11-14T03:06:41Z | ---
license: apache-2.0
base_model: vilsonrodrigues/falcon-7b-instruct-sharded
tags:
- falcon7b
- text-generation
- e-commerce
- generated_from_trainer
model-index:
- name: experiments
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# experiments
This model is a fine-tuned version of [vilsonrodrigues/falcon-7b-instruct-sharded](https://huggingface.co/vilsonrodrigues/falcon-7b-instruct-sharded) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
sbaner24/vit-base-patch16-224-Trial006-YEL_STEM1 | sbaner24 | 2023-11-14T03:06:18Z | 30 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-06-20T23:32:52Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-Trial006-YEL_STEM1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-Trial006-YEL_STEM1
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0494
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 60
- eval_batch_size: 60
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 240
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7481 | 0.89 | 2 | 0.6901 | 0.5455 |
| 0.6788 | 1.78 | 4 | 0.6743 | 0.6364 |
| 0.71 | 2.67 | 6 | 0.5935 | 0.6909 |
| 0.5911 | 4.0 | 9 | 0.5331 | 0.7091 |
| 0.572 | 4.89 | 11 | 0.4855 | 0.7636 |
| 0.5206 | 5.78 | 13 | 0.5816 | 0.7273 |
| 0.5029 | 6.67 | 15 | 0.4695 | 0.7818 |
| 0.4806 | 8.0 | 18 | 0.4680 | 0.8 |
| 0.3945 | 8.89 | 20 | 0.4059 | 0.8364 |
| 0.356 | 9.78 | 22 | 0.3764 | 0.8727 |
| 0.3153 | 10.67 | 24 | 0.3162 | 0.9091 |
| 0.3563 | 12.0 | 27 | 0.2654 | 0.8909 |
| 0.2904 | 12.89 | 29 | 0.2471 | 0.9091 |
| 0.2425 | 13.78 | 31 | 0.2265 | 0.8909 |
| 0.2402 | 14.67 | 33 | 0.2225 | 0.8909 |
| 0.2309 | 16.0 | 36 | 0.1752 | 0.9273 |
| 0.294 | 16.89 | 38 | 0.1769 | 0.9273 |
| 0.2004 | 17.78 | 40 | 0.1878 | 0.9091 |
| 0.2229 | 18.67 | 42 | 0.1126 | 0.9455 |
| 0.2804 | 20.0 | 45 | 0.1212 | 0.9273 |
| 0.2113 | 20.89 | 47 | 0.0494 | 1.0 |
| 0.1744 | 21.78 | 49 | 0.0767 | 0.9818 |
| 0.1645 | 22.67 | 51 | 0.0531 | 0.9818 |
| 0.2322 | 24.0 | 54 | 0.0757 | 0.9455 |
| 0.1934 | 24.89 | 56 | 0.0302 | 1.0 |
| 0.2172 | 25.78 | 58 | 0.1479 | 0.9455 |
| 0.1918 | 26.67 | 60 | 0.0374 | 0.9818 |
| 0.1948 | 28.0 | 63 | 0.1695 | 0.9273 |
| 0.2099 | 28.89 | 65 | 0.0743 | 0.9818 |
| 0.155 | 29.78 | 67 | 0.0275 | 1.0 |
| 0.1563 | 30.67 | 69 | 0.0273 | 1.0 |
| 0.149 | 32.0 | 72 | 0.0628 | 0.9818 |
| 0.1767 | 32.89 | 74 | 0.0388 | 0.9818 |
| 0.1828 | 33.78 | 76 | 0.0289 | 1.0 |
| 0.1825 | 34.67 | 78 | 0.0522 | 0.9636 |
| 0.197 | 36.0 | 81 | 0.0187 | 1.0 |
| 0.1534 | 36.89 | 83 | 0.0200 | 1.0 |
| 0.2099 | 37.78 | 85 | 0.0269 | 0.9818 |
| 0.1694 | 38.67 | 87 | 0.0176 | 1.0 |
| 0.101 | 40.0 | 90 | 0.0140 | 1.0 |
| 0.1488 | 40.89 | 92 | 0.0162 | 1.0 |
| 0.184 | 41.78 | 94 | 0.0175 | 1.0 |
| 0.182 | 42.67 | 96 | 0.0148 | 1.0 |
| 0.1364 | 44.0 | 99 | 0.0139 | 1.0 |
| 0.1332 | 44.44 | 100 | 0.0137 | 1.0 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 1.12.1
- Datasets 2.12.0
- Tokenizers 0.13.1
|
zkdeng/convnextv2-tiny-1k-224-finetuned-combinedSpiders | zkdeng | 2023-11-14T02:52:31Z | 8 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"convnextv2",
"image-classification",
"generated_from_trainer",
"base_model:facebook/convnextv2-tiny-1k-224",
"base_model:finetune:facebook/convnextv2-tiny-1k-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-14T02:52:19Z | ---
license: apache-2.0
base_model: facebook/convnextv2-tiny-1k-224
tags:
- generated_from_trainer
model-index:
- name: convnextv2-tiny-1k-224-finetuned-combinedSpiders
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnextv2-tiny-1k-224-finetuned-combinedSpiders
This model is a fine-tuned version of [facebook/convnextv2-tiny-1k-224](https://huggingface.co/facebook/convnextv2-tiny-1k-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.7478
- eval_accuracy: 0.8026
- eval_precision: 0.6420
- eval_recall: 0.4903
- eval_f1: 0.5256
- eval_runtime: 113.6488
- eval_samples_per_second: 236.853
- eval_steps_per_second: 14.809
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
ndubey/llama-2-7b-mlabonne-enhanced | ndubey | 2023-11-14T02:46:52Z | 4 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-09T11:29:59Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
startlightquyet/ppo-Huggy | startlightquyet | 2023-11-14T02:45:13Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-11-14T02:45:06Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: startlightquyet/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
higgsfield/hackenews-comments-mistral-7b | higgsfield | 2023-11-14T02:43:59Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-11-14T02:36:28Z | ---
{}
---
---
{ card_data }
---
# Model Card for MyCoolModel
This model does this and that.
higgsfield.ai/model/6552d81c52fcbb5bf6ac6b18
This model was created by [@{ author }](https://hf.co/{author}). |
TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-GPTQ | TheBloke | 2023-11-14T02:26:19Z | 27 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"llama-2",
"code",
"en",
"dataset:jondurbin/airoboros-2.2.1",
"dataset:Open-Orca/OpenOrca",
"dataset:garage-bAInd/Open-Platypus",
"dataset:ehartford/samantha-data",
"arxiv:2310.06825",
"base_model:uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b",
"base_model:quantized:uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b",
"license:llama2",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
]
| text-generation | 2023-11-14T01:58:56Z | ---
base_model: uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b
datasets:
- jondurbin/airoboros-2.2.1
- Open-Orca/OpenOrca
- garage-bAInd/Open-Platypus
- ehartford/samantha-data
inference: false
language:
- en
library_name: transformers
license: llama2
model-index:
- name: SpeechlessCoder
results:
- dataset:
name: HumanEval
type: openai_humaneval
metrics:
- name: pass@1
type: pass@1
value: 34.146
verified: false
task:
type: text-generation
model_creator: Jiangwen Su
model_name: Speechless Mistral Dolphin Orca Platypus Samantha 7B
model_type: mistral
pipeline_tag: text-generation
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
tags:
- llama-2
- code
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Speechless Mistral Dolphin Orca Platypus Samantha 7B - GPTQ
- Model creator: [Jiangwen Su](https://huggingface.co/uukuguy)
- Original model: [Speechless Mistral Dolphin Orca Platypus Samantha 7B](https://huggingface.co/uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b)
<!-- description start -->
# Description
This repo contains GPTQ model files for [Jiangwen Su's Speechless Mistral Dolphin Orca Platypus Samantha 7B](https://huggingface.co/uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-GGUF)
* [Jiangwen Su's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 4096 | 4.16 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 4096 | 4.57 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 4096 | 7.52 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 4096 | 7.68 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 4096 | 8.17 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 4096 | 4.29 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `speechless-mistral-dolphin-orca-platypus-samantha-7B-GPTQ`:
```shell
mkdir speechless-mistral-dolphin-orca-platypus-samantha-7B-GPTQ
huggingface-cli download TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-GPTQ --local-dir speechless-mistral-dolphin-orca-platypus-samantha-7B-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir speechless-mistral-dolphin-orca-platypus-samantha-7B-GPTQ
huggingface-cli download TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir speechless-mistral-dolphin-orca-platypus-samantha-7B-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir speechless-mistral-dolphin-orca-platypus-samantha-7B-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-GPTQ --local-dir speechless-mistral-dolphin-orca-platypus-samantha-7B-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `speechless-mistral-dolphin-orca-platypus-samantha-7B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Jiangwen Su's Speechless Mistral Dolphin Orca Platypus Samantha 7B
<p><h1> speechless-mistral-dolphin-orca-platypus-samantha-7b </h1></p>
This model is a merge of ehartford/dolphin-2.1-mistral-7b, Open-Orca/Mistral-7B-OpenOrca, bhenrym14/mistral-7b-platypus-fp16 and ehartford/samantha-1.2-mistral-7b.
I'm very sorry for giving the model such a long and peculiar name. Originally it was just my lazy way, while making models, of easily distinguishing the various model and dataset combinations. I didn't expect the [previous model](https://huggingface.co/uukuguy/speechless-llama2-hermes-orca-platypus-wizardlm-13b) ([TheBloke GPTQ Version](https://huggingface.co/TheBloke/Speechless-Llama2-Hermes-Orca-Platypus-WizardLM-13B-GPTQ)) to be so popular. This time, at several people's request, I am releasing a model based on Mistral, and it inherits the style of the super-long name. You're welcome to try the model; please refrain from harsh criticism if you don't like it.
Code: https://github.com/uukuguy/speechless
## HumanEval
| Metric | Value |
| --- | --- |
| humaneval-python | 34.146|
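
For context, HumanEval pass@1 is usually computed with the unbiased pass@k estimator from the Codex paper; a minimal sketch (not this repo's evaluation code):

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: n samples generated, c of them correct, k draws."""
    if n - c < k:
        return 1.0
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

# e.g. 200 samples per task with 68 passing gives pass@1 = 0.34
print(pass_at_k(200, 68, 1))
```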
[Big Code Models Leaderboard](https://huggingface.co/spaces/bigcode/bigcode-models-leaderboard)
CodeLlama-34B-Python: 53.29
CodeLlama-34B-Instruct: 50.79
CodeLlama-13B-Instruct: 50.6
CodeLlama-34B: 45.11
CodeLlama-13B-Python: 42.89
CodeLlama-13B: 35.07
Mistral-7B-v0.1: 30.488
## LM-Evaluation-Harness
[Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
| Metric | Value |
| --- | --- |
| ARC | 64.33 |
| HellaSwag | 84.4|
| MMLU | 63.72 |
| TruthfulQA | 52.52|
| Winogrande | 78.37 |
| GSM8K | 21.38 |
| DROP | 8.66 |
| Average | 53.34 |
# Model Card for Mistral-7B-v0.1
The Mistral-7B-v0.1 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters.
Mistral-7B-v0.1 outperforms Llama 2 13B on all benchmarks we tested.
For full details of this model please read our [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/announcing-mistral-7b/).
## Model Architecture
Mistral-7B-v0.1 is a transformer model, with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
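
To make the sliding-window idea concrete, here is an illustrative attention mask in PyTorch (a toy sketch of the masking pattern only, not Mistral's actual implementation):

```python
import torch

def sliding_window_mask(seq_len: int, window: int) -> torch.Tensor:
    # True where query position i may attend to key position j:
    # causal (j <= i) and within the last `window` positions.
    i = torch.arange(seq_len).unsqueeze(1)
    j = torch.arange(seq_len).unsqueeze(0)
    return (j <= i) & (j > i - window)

print(sliding_window_mask(6, 3).int())
```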
## Troubleshooting
- If you see the following error:
  ```
  KeyError: 'mistral'
  ```
- Or:
  ```
  NotImplementedError: Cannot copy out of meta tensor; no data!
  ```
Ensure you are utilizing a stable version of Transformers, 4.34.0 or newer.
## Notice
Mistral 7B is a pretrained base model and therefore does not have any moderation mechanisms.
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
|
karawalla/mistral_b_karawalla_test | karawalla | 2023-11-14T02:13:12Z | 11 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
]
| null | 2023-11-14T00:58:39Z | ---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
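
A minimal sketch of loading the base model with these settings and attaching this adapter (the repo ids are taken from the card metadata; everything else is an assumption, not the original training code):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_compute_dtype=torch.float32,
)
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "karawalla/mistral_b_karawalla_test")
```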
### Framework versions
- PEFT 0.6.2.dev0
|
ermu2001/ChatAnything | ermu2001 | 2023-11-14T02:12:57Z | 0 | 7 | diffusers | [
"diffusers",
"safetensors",
"region:us"
]
| null | 2023-09-04T20:40:19Z | # Fantastic Open-source Models
Welcome to ChatAnything! Here, we've made available a collection of models sourced from Civitai and various other open-source communities. It's important to note that these models are not our own, and each comes with its own unique license.
## Licensing Information
Please refer to the specific license associated with each model. Understanding and adhering to these licenses is crucial for legal and ethical use.
Shout out to the amazing Open-source Community! Your contributions and dedication are truly remarkable 🥂.
Here are the sources of some of the Models in this repo:
- [Face-Landmark-ControlNet](https://huggingface.co/georgefen/Face-Landmark-ControlNet): An Awesome ControlNet with Face landmark using Stable Diffusion 1.5 as base Model.
- [Civitai](https://civitai.com/models) and [Huggingface_hub](https://huggingface.co/models): Find your ideal Image Generative Model on Civitai. These Communities are Crazy🥂. Here are Some Fantastic Derivatives of [stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5):
- [Game Icon Institute_mode](https://civitai.com/models/47800?modelVersionId=76533)
- [dreamshaper](https://civitai.com/models/4384/dreamshaper)
- [3D_Animation_Diffusion](https://civitai.com/models/118086?modelVersionId=128046)
- [anything-v5](https://huggingface.co/stablediffusionapi/anything-v5)
# Some Init Faces Generated
Explore the DATA directory to find a collection of generated faces. These examples provide a glimpse into the capabilities of our pipeline. Feel free to check them out for inspiration or to better understand the potential of these models.
We appreciate your interest and involvement in this open-source initiative. Together, we can create and share innovative solutions for the benefit of the community.
# Troubleshooting and Integration
If you encounter any issues or have questions, feel free to reach out. We're here to help and are committed to finding prompt solutions.
|
speechlessai/speechless-codellama-34b-v1.0 | speechlessai | 2023-11-14T02:11:45Z | 1,429 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-2",
"code",
"en",
"dataset:garage-bAInd/Open-Platypus",
"arxiv:2308.12950",
"license:llama2",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-12T12:45:55Z | ---
language:
- en
library_name: transformers
pipeline_tag: text-generation
datasets:
- garage-bAInd/Open-Platypus
tags:
- llama-2
- code
license: llama2
model-index:
- name: SpeechlessCoder
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value: 70.12
verified: false
---
<p><h1> speechless-codellama-34b-v1.0 </h1></p>
> 2023.10.06 [uukuguy/speechless-codellama-34b-v2.0](https://huggingface.co/uukuguy/speechless-codellama-34b-v2.0) release. humaneval-python pass@1: 75.61
Fine-tuned from Phind/Phind-CodeLlama-34B with the Dolphin (1% GPT4), Orca (1% GPT4) and Platypus (100%) datasets.
Code: https://github.com/uukuguy/speechless
| humaneval metrics | pass@1 |
| --- | --- |
| humaneval-python | 70.12 |
[Big Code Models Leaderboard](https://huggingface.co/spaces/bigcode/bigcode-models-leaderboard)
Phind-CodeLlama-34B-v2: 71.95
WizardCoder-Python-34B-V1.0: 70.73
Phind-CodeLlama-34B-v1: 65.85
WizardCoder-Python-13B-V1.0: 62.19
CodeLlama-34B-Python: 53.29
CodeLlama-34B-Instruct: 50.79
CodeLlama-13B-Instruct: 50.6
CodeLlama-34B: 45.11
CodeLlama-13B-Python: 42.89
CodeLlama-13B: 35.07
[Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
| Metric | Value |
| --- | --- |
| ARC | 52.47 |
| HellaSwag | 74.13 |
| MMLU | 53.47 |
| TruthfulQA | 47.14 |
| Average | 56.80 |
# **Code Llama**
Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the repository for the base 13B version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding. Links to other models can be found in the index at the bottom.
| | Base Model | Python | Instruct |
| --- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- |
| 7B | [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) | [codellama/CodeLlama-7b-Python-hf](https://huggingface.co/codellama/CodeLlama-7b-Python-hf) | [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) |
| 13B | [codellama/CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf) | [codellama/CodeLlama-13b-Python-hf](https://huggingface.co/codellama/CodeLlama-13b-Python-hf) | [codellama/CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) |
| 34B | [codellama/CodeLlama-34b-hf](https://huggingface.co/codellama/CodeLlama-34b-hf) | [codellama/CodeLlama-34b-Python-hf](https://huggingface.co/codellama/CodeLlama-34b-Python-hf) | [codellama/CodeLlama-34b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-34b-Instruct-hf) |
## Model Use
To use this model, please make sure to install transformers from `main` until the next version is released:
```bash
pip install git+https://github.com/huggingface/transformers.git@main accelerate
```
Model capabilities:
- [x] Code completion.
- [x] Infilling.
- [ ] Instructions / chat.
- [ ] Python specialist.
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "codellama/CodeLlama-13b-hf"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
sequences = pipeline(
'import socket\n\ndef ping_exponential_backoff(host: str):',
do_sample=True,
top_k=10,
temperature=0.1,
top_p=0.95,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
max_length=200,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
## Model Details
Note: Use of this model is governed by the Meta license. Meta developed and publicly released the Code Llama family of large language models (LLMs).
**Model Developers** Meta
**Variations** Code Llama comes in three model sizes, and three variants:
* Code Llama: base models designed for general code synthesis and understanding
* Code Llama - Python: designed specifically for Python
* Code Llama - Instruct: for instruction following and safer deployment
All variants are available in sizes of 7B, 13B and 34B parameters.
**This repository contains the base version of the 13B parameters model.**
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Code Llama is an auto-regressive language model that uses an optimized transformer architecture.
**Model Dates** Code Llama and its variants have been trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)" or its [arXiv page](https://arxiv.org/abs/2308.12950).
## Intended Use
**Intended Use Cases** Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications.
**Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants.
## Hardware and Software
**Training Factors** We used custom training libraries. The training and fine-tuning of the released models have been performed on Meta’s Research Super Cluster.
**Carbon Footprint** In aggregate, training all 9 Code Llama models required 400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 65.3 tCO2eq, 100% of which were offset by Meta’s sustainability program.
## Training Data
All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details).
## Evaluation Results
See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.
## Ethical Considerations and Limitations
Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-user-guide](https://ai.meta.com/llama/responsible-user-guide).
|
speechlessai/speechless-codellama-airoboros-orca-platypus-13b | speechlessai | 2023-11-14T02:10:21Z | 1,433 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-2",
"en",
"dataset:jondurbin/airoboros-2.2",
"dataset:Open-Orca/OpenOrca",
"dataset:garage-bAInd/Open-Platypus",
"arxiv:2308.12950",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-19T09:29:53Z | ---
language:
- en
library_name: transformers
pipeline_tag: text-generation
datasets:
- jondurbin/airoboros-2.2
- Open-Orca/OpenOrca
- garage-bAInd/Open-Platypus
tags:
- llama-2
license: llama2
---
<p><h1> speechless-codellama-airoboros-orca-platypus-13b </h1></p>
The following datasets were used to fine-tune codellama/CodeLlama-13B in order to improve the model's reasoning and planning abilities.
- jondurbin/airoboros-2.2: filtered to the categories related to coding, reasoning and planning.
- Open-Orca/OpenOrca: filtered to the 'cot' category of the 1M GPT-4 dataset.
- garage-bAInd/Open-Platypus: used in full (100%).
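For illustration, a hedged sketch of this kind of filtering with the `datasets` library (the field and category names are assumptions, not the author's exact recipe):

```python
from datasets import load_dataset

# Assumed: airoboros-2.2 rows expose a "category" field; the kept set is illustrative.
airoboros = load_dataset("jondurbin/airoboros-2.2", split="train")
airoboros = airoboros.filter(lambda x: x["category"] in {"coding", "cot", "plan"})

# Assumed: OpenOrca rows from the CoT subset carry ids prefixed with "cot.".
orca = load_dataset("Open-Orca/OpenOrca", split="train")
orca = orca.filter(lambda x: x["id"].startswith("cot."))

# Open-Platypus is used in full.
platypus = load_dataset("garage-bAInd/Open-Platypus", split="train")
```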
Code: https://github.com/uukuguy/speechless
| Metric | Value |
| --- | --- |
| humaneval-python | 49.39 |
[Big Code Models Leaderboard](https://huggingface.co/spaces/bigcode/bigcode-models-leaderboard)
- CodeLlama-34B-Python: 53.29
- CodeLlama-34B-Instruct: 50.79
- CodeLlama-13B-Instruct: 50.6
- CodeLlama-34B: 45.11
- CodeLlama-13B-Python: 42.89
- CodeLlama-13B: 35.07
[Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
| Metric | Value |
| --- | --- |
| ARC | 44.88 |
| HellaSwag | 67.7 |
| MMLU | 43.16 |
| TruthfulQA | 40.88 |
| Average | 49.15 |
# **Code Llama**
Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the repository for the base 13B version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding. Links to other models can be found in the index at the bottom.
| | Base Model | Python | Instruct |
| --- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- |
| 7B | [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) | [codellama/CodeLlama-7b-Python-hf](https://huggingface.co/codellama/CodeLlama-7b-Python-hf) | [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) |
| 13B | [codellama/CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf) | [codellama/CodeLlama-13b-Python-hf](https://huggingface.co/codellama/CodeLlama-13b-Python-hf) | [codellama/CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) |
| 34B | [codellama/CodeLlama-34b-hf](https://huggingface.co/codellama/CodeLlama-34b-hf) | [codellama/CodeLlama-34b-Python-hf](https://huggingface.co/codellama/CodeLlama-34b-Python-hf) | [codellama/CodeLlama-34b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-34b-Instruct-hf) |
## Model Use
To use this model, please make sure to install transformers from `main` until the next version is released:
```bash
pip install git+https://github.com/huggingface/transformers.git@main accelerate
```
Model capabilities:
- [x] Code completion.
- [x] Infilling.
- [ ] Instructions / chat.
- [ ] Python specialist.
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "codellama/CodeLlama-13b-hf"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
sequences = pipeline(
'import socket\n\ndef ping_exponential_backoff(host: str):',
do_sample=True,
top_k=10,
temperature=0.1,
top_p=0.95,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
max_length=200,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
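Infilling is also listed among the capabilities above; a minimal sketch using the `<FILL_ME>` marker supported by the Code Llama tokenizer (the example function is illustrative):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "codellama/CodeLlama-13b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# The tokenizer splits the prompt at <FILL_ME> into a prefix/suffix infilling prompt.
prompt = 'def remove_non_ascii(s: str) -> str:\n    """ <FILL_ME>\n    return result'
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
filling = tokenizer.batch_decode(output[:, input_ids.shape[1]:], skip_special_tokens=True)[0]
print(prompt.replace("<FILL_ME>", filling))
```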
## Model Details
*Note: Use of this model is governed by the Meta license.* Meta developed and publicly released the Code Llama family of large language models (LLMs).
**Model Developers** Meta
**Variations** Code Llama comes in three model sizes and three variants:
* Code Llama: base models designed for general code synthesis and understanding
* Code Llama - Python: designed specifically for Python
* Code Llama - Instruct: for instruction following and safer deployment
All variants are available in sizes of 7B, 13B and 34B parameters.
**This repository contains the base version of the 13B-parameter model.**
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Code Llama is an auto-regressive language model that uses an optimized transformer architecture.
**Model Dates** Code Llama and its variants have been trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)" or its [arXiv page](https://arxiv.org/abs/2308.12950).
## Intended Use
**Intended Use Cases** Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications.
**Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants.
## Hardware and Software
**Training Factors** We used custom training libraries. The training and fine-tuning of the released models have been performed on Meta’s Research Super Cluster.
**Carbon Footprint** In aggregate, training all 9 Code Llama models required 400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 65.3 tCO2eq, 100% of which were offset by Meta’s sustainability program.
## Training Data
All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details).
## Evaluation Results
See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.
## Ethical Considerations and Limitations
Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-user-guide](https://ai.meta.com/llama/responsible-user-guide).
|
pharaouk/untitled-7B | pharaouk | 2023-11-14T02:08:44Z | 1,413 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-10-03T22:22:22Z | ---
license: apache-2.0
---
Untitled-7B model TBA.
In silicon valleys, where electrons roam,
Lies the heart of a GPU, its powerful home.
A symphony of circuits, a dance of light,
Working in harmony, day and night.
Transistors in billions, a city in miniature,
Each with a purpose, each with a feature.
Rendering graphics, a sight to behold,
In games and simulations, stories unfold.
Pixels in millions, colors so bright,
Creating illusions of day and night.
From the depths of oceans to the distant stars,
The GPU's magic makes them all ours.
Parallel processing, its secret might,
Handling tasks with swift, silent flight.
In the realm of data, it finds its stride,
Aiding researchers worldwide.
Deep learning algorithms, it does empower,
Processing data hour by hour.
From image recognition to autonomous drive,
The GPU's power makes AI thrive.
A silent sentinel in the machine's core,
The GPU's tasks are never a bore.
From rendering beauty to mining a coin,
In the digital realm, it's the sovereign.
So here's to the GPU, in all its glory,
The unsung hero of the tech story.
In every pixel, in every frame,
Lives the echo of its name. |
TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-GGUF | TheBloke | 2023-11-14T01:59:42Z | 407 | 9 | transformers | [
"transformers",
"gguf",
"mistral",
"llama-2",
"code",
"text-generation",
"en",
"dataset:jondurbin/airoboros-2.2.1",
"dataset:Open-Orca/OpenOrca",
"dataset:garage-bAInd/Open-Platypus",
"dataset:ehartford/samantha-data",
"arxiv:2310.06825",
"base_model:uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b",
"base_model:quantized:uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b",
"license:llama2",
"model-index",
"region:us"
]
| text-generation | 2023-11-14T00:30:32Z | ---
base_model: uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b
datasets:
- jondurbin/airoboros-2.2.1
- Open-Orca/OpenOrca
- garage-bAInd/Open-Platypus
- ehartford/samantha-data
inference: false
language:
- en
library_name: transformers
license: llama2
model-index:
- name: SpeechlessCoder
results:
- dataset:
name: HumanEval
type: openai_humaneval
metrics:
- name: pass@1
type: pass@1
value: 34.146
verified: false
task:
type: text-generation
model_creator: Jiangwen Su
model_name: Speechless Mistral Dolphin Orca Platypus Samantha 7B
model_type: mistral
pipeline_tag: text-generation
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
tags:
- llama-2
- code
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Speechless Mistral Dolphin Orca Platypus Samantha 7B - GGUF
- Model creator: [Jiangwen Su](https://huggingface.co/uukuguy)
- Original model: [Speechless Mistral Dolphin Orca Platypus Samantha 7B](https://huggingface.co/uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Jiangwen Su's Speechless Mistral Dolphin Orca Platypus Samantha 7B](https://huggingface.co/uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-GGUF)
* [Jiangwen Su's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
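As a rough illustration, the bits-per-weight figures translate into file sizes as follows (a back-of-the-envelope sketch: it ignores metadata overhead, and the `_M` files mix quant types across tensors, so real sizes in the table below run slightly larger):

```python
# Approximate GGUF tensor-data size from bits-per-weight (bpw),
# assuming a ~7.24B-parameter Mistral model.
params = 7.24e9
for name, bpw in [("Q2_K", 2.5625), ("Q3_K", 3.4375), ("Q4_K", 4.5),
                  ("Q5_K", 5.5), ("Q6_K", 6.5625)]:
    print(f"{name}: ~{params * bpw / 8 / 1e9:.2f} GB")
```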
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [speechless-mistral-dolphin-orca-platypus-samantha-7b.Q2_K.gguf](https://huggingface.co/TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-GGUF/blob/main/speechless-mistral-dolphin-orca-platypus-samantha-7b.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [speechless-mistral-dolphin-orca-platypus-samantha-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-GGUF/blob/main/speechless-mistral-dolphin-orca-platypus-samantha-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [speechless-mistral-dolphin-orca-platypus-samantha-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-GGUF/blob/main/speechless-mistral-dolphin-orca-platypus-samantha-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [speechless-mistral-dolphin-orca-platypus-samantha-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-GGUF/blob/main/speechless-mistral-dolphin-orca-platypus-samantha-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [speechless-mistral-dolphin-orca-platypus-samantha-7b.Q4_0.gguf](https://huggingface.co/TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-GGUF/blob/main/speechless-mistral-dolphin-orca-platypus-samantha-7b.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [speechless-mistral-dolphin-orca-platypus-samantha-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-GGUF/blob/main/speechless-mistral-dolphin-orca-platypus-samantha-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [speechless-mistral-dolphin-orca-platypus-samantha-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-GGUF/blob/main/speechless-mistral-dolphin-orca-platypus-samantha-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [speechless-mistral-dolphin-orca-platypus-samantha-7b.Q5_0.gguf](https://huggingface.co/TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-GGUF/blob/main/speechless-mistral-dolphin-orca-platypus-samantha-7b.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [speechless-mistral-dolphin-orca-platypus-samantha-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-GGUF/blob/main/speechless-mistral-dolphin-orca-platypus-samantha-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [speechless-mistral-dolphin-orca-platypus-samantha-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-GGUF/blob/main/speechless-mistral-dolphin-orca-platypus-samantha-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [speechless-mistral-dolphin-orca-platypus-samantha-7b.Q6_K.gguf](https://huggingface.co/TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-GGUF/blob/main/speechless-mistral-dolphin-orca-platypus-samantha-7b.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [speechless-mistral-dolphin-orca-platypus-samantha-7b.Q8_0.gguf](https://huggingface.co/TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-GGUF/blob/main/speechless-mistral-dolphin-orca-platypus-samantha-7b.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-GGUF and below it, a specific filename to download, such as: speechless-mistral-dolphin-orca-platypus-samantha-7b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-GGUF speechless-mistral-dolphin-orca-platypus-samantha-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-GGUF speechless-mistral-dolphin-orca-platypus-samantha-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m speechless-mistral-dolphin-orca-platypus-samantha-7b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-GGUF", model_file="speechless-mistral-dolphin-orca-platypus-samantha-7b.Q4_K_M.gguf", model_type="mistral", gpu_layers=50)
print(llm("AI is going to"))
```
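A minimal llama-cpp-python sketch using the ChatML template from this card (the system message is a placeholder):

```python
from llama_cpp import Llama

# Set n_gpu_layers to 0 if no GPU acceleration is available on your system.
llm = Llama(
    model_path="speechless-mistral-dolphin-orca-platypus-samantha-7b.Q4_K_M.gguf",
    n_ctx=2048,
    n_gpu_layers=32,
)
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nTell me about AI<|im_end|>\n"
    "<|im_start|>assistant\n"
)
output = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(output["choices"][0]["text"])
```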
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Jiangwen Su's Speechless Mistral Dolphin Orca Platypus Samantha 7B
<p><h1> speechless-mistral-dolphin-orca-platypus-samantha-7b </h1></p>
This model is a merge of ehartford/dolphin-2.1-mistral-7b, Open-Orca/Mistral-7B-OpenOrca, bhenrym14/mistral-7b-platypus-fp16 and ehartford/samantha-1.2-mistral-7b.
Apologies for the long and peculiar name. Originally it was just a lazy way to tell various model and dataset combinations apart while experimenting; I didn't expect the [previous model](https://huggingface.co/uukuguy/speechless-llama2-hermes-orca-platypus-wizardlm-13b) ([TheBloke GPTQ version](https://huggingface.co/TheBloke/Speechless-Llama2-Hermes-Orca-Platypus-WizardLM-13B-GPTQ)) to become so popular. This time, at several users' request, I am releasing a Mistral-based model, and the super-long naming style has carried over with it. You're welcome to try the model; please refrain from harsh criticism if you don't like it.
Code: https://github.com/uukuguy/speechless
## HumanEval
| Metric | Value |
| --- | --- |
| humaneval-python | 34.146|
[Big Code Models Leaderboard](https://huggingface.co/spaces/bigcode/bigcode-models-leaderboard)
- CodeLlama-34B-Python: 53.29
- CodeLlama-34B-Instruct: 50.79
- CodeLlama-13B-Instruct: 50.6
- CodeLlama-34B: 45.11
- CodeLlama-13B-Python: 42.89
- CodeLlama-13B: 35.07
- Mistral-7B-v0.1: 30.488
## LM-Evaluation-Harness
[Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
| Metric | Value |
| --- | --- |
| ARC | 64.33 |
| HellaSwag | 84.4|
| MMLU | 63.72 |
| TruthfulQA | 52.52|
| Winogrande | 78.37 |
| GSM8K | 21.38 |
| DROP | 8.66 |
| Average | 53.34 |
# Model Card for Mistral-7B-v0.1
The Mistral-7B-v0.1 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters.
Mistral-7B-v0.1 outperforms Llama 2 13B on all benchmarks we tested.
For full details of this model please read our [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/announcing-mistral-7b/).
## Model Architecture
Mistral-7B-v0.1 is a transformer model, with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
## Troubleshooting
- If you see the following error:
```
KeyError: 'mistral'
```
- Or:
```
NotImplementedError: Cannot copy out of meta tensor; no data!
```
Ensure you are utilizing a stable version of Transformers, 4.34.0 or newer.
## Notice
Mistral 7B is a pretrained base model and therefore does not have any moderation mechanisms.
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
<!-- original-model-card end -->
|
TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-AWQ | TheBloke | 2023-11-14T01:59:08Z | 11 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"llama-2",
"code",
"en",
"dataset:jondurbin/airoboros-2.2.1",
"dataset:Open-Orca/OpenOrca",
"dataset:garage-bAInd/Open-Platypus",
"dataset:ehartford/samantha-data",
"arxiv:2310.06825",
"base_model:uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b",
"base_model:quantized:uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b",
"license:llama2",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
]
| text-generation | 2023-11-14T00:30:31Z | ---
base_model: uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b
datasets:
- jondurbin/airoboros-2.2.1
- Open-Orca/OpenOrca
- garage-bAInd/Open-Platypus
- ehartford/samantha-data
inference: false
language:
- en
library_name: transformers
license: llama2
model-index:
- name: SpeechlessCoder
results:
- dataset:
name: HumanEval
type: openai_humaneval
metrics:
- name: pass@1
type: pass@1
value: 34.146
verified: false
task:
type: text-generation
model_creator: Jiangwen Su
model_name: Speechless Mistral Dolphin Orca Platypus Samantha 7B
model_type: mistral
pipeline_tag: text-generation
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
tags:
- llama-2
- code
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Speechless Mistral Dolphin Orca Platypus Samantha 7B - AWQ
- Model creator: [Jiangwen Su](https://huggingface.co/uukuguy)
- Original model: [Speechless Mistral Dolphin Orca Platypus Samantha 7B](https://huggingface.co/uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b)
<!-- description start -->
## Description
This repo contains AWQ model files for [Jiangwen Su's Speechless Mistral Dolphin Orca Platypus Samantha 7B](https://huggingface.co/uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - Llama and Mistral models only
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-GGUF)
* [Jiangwen Su's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
I currently release 128g GEMM models only. The addition of group_size 32 models and GEMV kernel models is being actively considered.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 4.15 GB |
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to do a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `speechless-mistral-dolphin-orca-platypus-samantha-7B-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click Load, and the model will load and is now ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.
For example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-AWQ --quantization awq --dtype auto
```
- When using vLLM from Python code, again set `quantization=awq`.
For example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Tell me about AI",
"Write a story about llamas",
"What is 291 - 150?",
"How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
system_message = "You are a helpful assistant."  # example system prompt
prompt_template = '''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
prompts = [prompt_template.format(system_message=system_message, prompt=prompt) for prompt in prompts]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-AWQ", quantization="awq", dtype="auto")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->
<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: ", response)
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers
### Install the necessary packages
- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.
```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```
Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.
If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:
```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### Transformers example code (requires Transformers 4.35.0 and later)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model_name_or_path = "TheBloke/speechless-mistral-dolphin-orca-platypus-samantha-7B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
model_name_or_path,
low_cpu_mem_usage=True,
device_map="cuda:0"
)
# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
prompt = "Tell me about AI"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
# Convert prompt to tokens
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
generation_params = {
"do_sample": True,
"temperature": 0.7,
"top_p": 0.95,
"top_k": 40,
"max_new_tokens": 512,
"repetition_penalty": 1.1
}
# Generate streamed output, visible one token at a time
generation_output = model.generate(
tokens,
streamer=streamer,
**generation_params
)
# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
tokens,
**generation_params
)
# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)
# Inference is also possible via Transformers' pipeline
from transformers import pipeline
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
**generation_params
)
pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Jiangwen Su's Speechless Mistral Dolphin Orca Platypus Samantha 7B
<p><h1> speechless-mistral-dolphin-orca-platypus-samantha-7b </h1></p>
This model is a merge of ehartford/dolphin-2.1-mistral-7b, Open-Orca/Mistral-7B-OpenOrca, bhenrym14/mistral-7b-platypus-fp16 and ehartford/samantha-1.2-mistral-7b.
Apologies for the long and peculiar name. Originally it was just a lazy way to tell various model and dataset combinations apart while experimenting; I didn't expect the [previous model](https://huggingface.co/uukuguy/speechless-llama2-hermes-orca-platypus-wizardlm-13b) ([TheBloke GPTQ version](https://huggingface.co/TheBloke/Speechless-Llama2-Hermes-Orca-Platypus-WizardLM-13B-GPTQ)) to become so popular. This time, at several users' request, I am releasing a Mistral-based model, and the super-long naming style has carried over with it. You're welcome to try the model; please refrain from harsh criticism if you don't like it.
Code: https://github.com/uukuguy/speechless
## HumanEval
| Metric | Value |
| --- | --- |
| humaneval-python | 34.146|
[Big Code Models Leaderboard](https://huggingface.co/spaces/bigcode/bigcode-models-leaderboard)
- CodeLlama-34B-Python: 53.29
- CodeLlama-34B-Instruct: 50.79
- CodeLlama-13B-Instruct: 50.6
- CodeLlama-34B: 45.11
- CodeLlama-13B-Python: 42.89
- CodeLlama-13B: 35.07
- Mistral-7B-v0.1: 30.488
## LM-Evaluation-Harness
[Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
| Metric | Value |
| --- | --- |
| ARC | 64.33 |
| HellaSwag | 84.4|
| MMLU | 63.72 |
| TruthfulQA | 52.52|
| Winogrande | 78.37 |
| GSM8K | 21.38 |
| DROP | 8.66 |
| Average | 53.34 |
# Model Card for Mistral-7B-v0.1
The Mistral-7B-v0.1 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters.
Mistral-7B-v0.1 outperforms Llama 2 13B on all benchmarks we tested.
For full details of this model please read our [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/announcing-mistral-7b/).
## Model Architecture
Mistral-7B-v0.1 is a transformer model, with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
## Troubleshooting
- If you see the following error:
```
KeyError: 'mistral'
```
- Or:
```
NotImplementedError: Cannot copy out of meta tensor; no data!
```
Ensure you are utilizing a stable version of Transformers, 4.34.0 or newer.
## Notice
Mistral 7B is a pretrained base model and therefore does not have any moderation mechanisms.
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
|
sbaner24/vit-base-patch16-224-Trial006_007_008-YEL_STEM3 | sbaner24 | 2023-11-14T01:57:35Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-14T01:40:21Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-Trial006_007_008-YEL_STEM3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-Trial006_007_008-YEL_STEM3
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0514
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
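As a minimal inference sketch (standard ViT image-classification usage; the image path is a placeholder):

```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image

repo = "sbaner24/vit-base-patch16-224-Trial006_007_008-YEL_STEM3"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("example.jpg")  # placeholder image
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```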
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 60
- eval_batch_size: 60
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 240
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
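For reference, a sketch of `TrainingArguments` matching the hyperparameters above (the output path is illustrative):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="vit-base-patch16-224-Trial006_007_008-YEL_STEM3",
    learning_rate=5e-5,
    per_device_train_batch_size=60,
    per_device_eval_batch_size=60,
    gradient_accumulation_steps=4,  # effective batch size: 60 * 4 = 240
    warmup_ratio=0.1,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    seed=42,
)
```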
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7488 | 0.8 | 3 | 0.7067 | 0.5253 |
| 0.6693 | 1.87 | 7 | 0.6195 | 0.6263 |
| 0.5816 | 2.93 | 11 | 0.5114 | 0.7273 |
| 0.4669 | 4.0 | 15 | 0.3586 | 0.8990 |
| 0.3998 | 4.8 | 18 | 0.2867 | 0.9192 |
| 0.3579 | 5.87 | 22 | 0.2326 | 0.9596 |
| 0.3087 | 6.93 | 26 | 0.1684 | 0.9596 |
| 0.2448 | 8.0 | 30 | 0.1525 | 0.9697 |
| 0.207 | 8.8 | 33 | 0.1863 | 0.9394 |
| 0.2283 | 9.87 | 37 | 0.1259 | 0.9697 |
| 0.2354 | 10.93 | 41 | 0.1090 | 0.9495 |
| 0.2452 | 12.0 | 45 | 0.0999 | 0.9596 |
| 0.2373 | 12.8 | 48 | 0.0702 | 0.9899 |
| 0.1516 | 13.87 | 52 | 0.0749 | 0.9697 |
| 0.1893 | 14.93 | 56 | 0.0808 | 0.9798 |
| 0.1761 | 16.0 | 60 | 0.0514 | 1.0 |
| 0.1971 | 16.8 | 63 | 0.0954 | 0.9798 |
| 0.1801 | 17.87 | 67 | 0.0536 | 0.9899 |
| 0.2004 | 18.93 | 71 | 0.0836 | 0.9596 |
| 0.1899 | 20.0 | 75 | 0.0500 | 0.9899 |
| 0.2222 | 20.8 | 78 | 0.0874 | 0.9798 |
| 0.1341 | 21.87 | 82 | 0.0371 | 1.0 |
| 0.2187 | 22.93 | 86 | 0.0545 | 1.0 |
| 0.1547 | 24.0 | 90 | 0.0445 | 1.0 |
| 0.1307 | 24.8 | 93 | 0.0404 | 1.0 |
| 0.1563 | 25.87 | 97 | 0.0377 | 1.0 |
| 0.1467 | 26.93 | 101 | 0.0405 | 1.0 |
| 0.1413 | 28.0 | 105 | 0.0379 | 1.0 |
| 0.1317 | 28.8 | 108 | 0.0367 | 1.0 |
| 0.1264 | 29.87 | 112 | 0.0359 | 1.0 |
| 0.128 | 30.93 | 116 | 0.0328 | 1.0 |
| 0.1292 | 32.0 | 120 | 0.0356 | 1.0 |
| 0.1549 | 32.8 | 123 | 0.0339 | 1.0 |
| 0.1272 | 33.87 | 127 | 0.0296 | 1.0 |
| 0.1376 | 34.93 | 131 | 0.0295 | 1.0 |
| 0.1884 | 36.0 | 135 | 0.0308 | 1.0 |
| 0.1536 | 36.8 | 138 | 0.0295 | 1.0 |
| 0.166 | 37.87 | 142 | 0.0299 | 1.0 |
| 0.0929 | 38.93 | 146 | 0.0307 | 1.0 |
| 0.1592 | 40.0 | 150 | 0.0307 | 1.0 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 1.12.1
- Datasets 2.12.0
- Tokenizers 0.13.1
|
lmqg/mt5-base-zhquad-qg-ae-trimmed-50000 | lmqg | 2023-11-14T01:50:11Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-11-13T22:44:46Z | # Vocabulary Trimmed [lmqg/mt5-base-zhquad-qg-ae](https://huggingface.co/lmqg/mt5-base-zhquad-qg-ae): `lmqg/mt5-base-zhquad-qg-ae-trimmed-50000`
This model is a trimmed version of [lmqg/mt5-base-zhquad-qg-ae](https://huggingface.co/lmqg/mt5-base-zhquad-qg-ae) produced by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress model size.
The following table shows a summary of the trimming process.
| | lmqg/mt5-base-zhquad-qg-ae | lmqg/mt5-base-zhquad-qg-ae-trimmed-50000 |
|:---------------------------|:-----------------------------|:-------------------------------------------|
| parameter_size_full | 582,384,384 | 275,032,320 |
| parameter_size_embedding | 384,155,136 | 76,803,072 |
| vocab_size | 250,101 | 50,002 |
| compression_rate_full | 100.0 | 47.23 |
| compression_rate_embedding | 100.0 | 19.99 |
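These figures can be sanity-checked by hand: mT5-base has a hidden size of 768, and the embedding count covers two matrices of shape vocab_size × 768 (the input embedding and the untied LM head):

```python
d_model = 768  # mT5-base hidden size
full_vocab, trimmed_vocab = 250_101, 50_002
total_full = 582_384_384

full_emb = full_vocab * d_model * 2      # input embedding + LM head -> 384,155,136
trim_emb = trimmed_vocab * d_model * 2   # -> 76,803,072
shared = total_full - full_emb           # transformer parameters kept intact

print(f"embedding compression: {100 * trim_emb / full_emb:.2f}%")          # 19.99
print(f"full compression: {100 * (shared + trim_emb) / total_full:.2f}%")  # 47.23
```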
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| zh | vocabtrimmer/mc4_validation | text | zh | validation | 50000 | 2 | |
spencerkifell/applicruiter-model | spencerkifell | 2023-11-14T01:45:20Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2023-11-14T01:30:00Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# spencerkifell/applicruiter-model
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('spencerkifell/applicruiter-model')
embeddings = model.encode(sentences)
print(embeddings)
```
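Since this is a similarity model, embeddings are typically compared with cosine similarity (a small sketch; the input texts are placeholders):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('spencerkifell/applicruiter-model')
embeddings = model.encode(
    ["Experienced Python developer with an ML background",
     "Hiring a machine learning engineer"],
    convert_to_tensor=True,
)
print(util.cos_sim(embeddings[0], embeddings[1]))
```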
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=spencerkifell/applicruiter-model)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 60 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 5,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
mogemboh/MistralFinalMaverick | mogemboh | 2023-11-14T01:43:41Z | 2 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"region:us"
]
| null | 2023-11-14T01:43:12Z | ---
library_name: peft
base_model: bn22/Mistral-7B-Instruct-v0.1-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
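In the absence of author-provided code, a minimal sketch for loading this PEFT adapter on its declared base model (`bn22/Mistral-7B-Instruct-v0.1-sharded`, from the front matter):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("bn22/Mistral-7B-Instruct-v0.1-sharded", device_map="auto")
model = PeftModel.from_pretrained(base, "mogemboh/MistralFinalMaverick")
tokenizer = AutoTokenizer.from_pretrained("bn22/Mistral-7B-Instruct-v0.1-sharded")
```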
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2.dev0
|
higgsfield/roleplay-mistral-3 | higgsfield | 2023-11-14T01:38:39Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-11-14T01:30:59Z | ---
{}
---
# Model Card for MyCoolModel
This model does this and that.
higgsfield.ai/model/6552cccc52ba0f508792f494
This model was created by [@{ author }](https://hf.co/{author}). |
picklehari/taxi | picklehari | 2023-11-14T01:33:27Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-14T01:31:07Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import pickle

import gymnasium as gym  # use `import gym` on older setups
from huggingface_hub import hf_hub_download

# Equivalent to load_from_hub from the Deep RL course: download and unpickle the model
model = pickle.load(open(hf_hub_download(repo_id="picklehari/taxi", filename="q-learning.pkl"), "rb"))

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
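A greedy rollout sketch, assuming the pickled dict stores the learned Q-table under the `"qtable"` key (the convention used in the Deep RL course notebooks):

```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # always take the best-known action
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```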
|
Lagyamfi/int8-whisper-large-v2-asr | Lagyamfi | 2023-11-14T01:23:39Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"generated_from_trainer",
"tw",
"dataset:lagyamfi/akan_audio",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"region:us"
]
| null | 2023-11-13T21:48:57Z | ---
language:
- tw
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- lagyamfi/akan_audio
model-index:
- name: Whisper Small Tw - Lagyamfi_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Tw - Lagyamfi_v2
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Akan audio dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6401
## Model description
More information needed
## Intended uses & limitations
More information needed
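A minimal inference sketch with the 🤗 pipeline (the audio path is a placeholder; this assumes the checkpoint loads as a standard Whisper ASR model):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Lagyamfi/int8-whisper-large-v2-asr")
print(asr("sample_akan.wav")["text"])  # placeholder path to a local audio file
```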
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7315 | 1.0 | 137 | 0.7419 |
| 0.4855 | 2.0 | 274 | 0.5828 |
| 0.335 | 3.0 | 411 | 0.5613 |
| 0.2071 | 4.0 | 548 | 0.5593 |
| 0.129 | 5.0 | 685 | 0.5512 |
| 0.0691 | 6.0 | 822 | 0.5786 |
| 0.0394 | 7.0 | 959 | 0.5997 |
| 0.0168 | 8.0 | 1096 | 0.6296 |
| 0.0111 | 9.0 | 1233 | 0.6353 |
| 0.0082 | 10.0 | 1370 | 0.6401 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
TheBloke/Noromaid-13B-v0.1.1-AWQ | TheBloke | 2023-11-14T01:15:40Z | 11 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"base_model:NeverSleep/Noromaid-13b-v0.1.1",
"base_model:quantized:NeverSleep/Noromaid-13b-v0.1.1",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
]
| text-generation | 2023-11-13T23:01:44Z | ---
base_model: NeverSleep/Noromaid-13b-v0.1.1
inference: false
license: cc-by-nc-4.0
model_creator: NeverSleep
model_name: Noromaid 13B v0.1.1
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Noromaid 13B v0.1.1 - AWQ
- Model creator: [NeverSleep](https://huggingface.co/NeverSleep)
- Original model: [Noromaid 13B v0.1.1](https://huggingface.co/NeverSleep/Noromaid-13b-v0.1.1)
<!-- description start -->
## Description
This repo contains AWQ model files for [NeverSleep's Noromaid 13B v0.1.1](https://huggingface.co/NeverSleep/Noromaid-13b-v0.1.1).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - Llama and Mistral models only
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Noromaid-13B-v0.1.1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Noromaid-13B-v0.1.1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Noromaid-13B-v0.1.1-GGUF)
* [NeverSleep's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NeverSleep/Noromaid-13b-v0.1.1)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [NeverSleep's Noromaid 13B v0.1.1](https://huggingface.co/NeverSleep/Noromaid-13b-v0.1.1).
<!-- licensing end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Noromaid-13B-v0.1.1-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.25 GB |
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Noromaid-13B-v0.1.1-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Noromaid-13B-v0.1.1-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click Load, and the model will load and is now ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.
For example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/Noromaid-13B-v0.1.1-AWQ --quantization awq --dtype auto
```
- When using vLLM from Python code, again set `quantization=awq`.
For example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Tell me about AI",
"Write a story about llamas",
"What is 291 - 150?",
"How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/Noromaid-13B-v0.1.1-AWQ", quantization="awq", dtype="auto")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm start -->
<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Noromaid-13B-v0.1.1-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: ", response)
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers
### Install the necessary packages
- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.
```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```
Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.
If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:
```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### Transformers example code (requires Transformers 4.35.0 and later)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model_name_or_path = "TheBloke/Noromaid-13B-v0.1.1-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
model_name_or_path,
low_cpu_mem_usage=True,
device_map="cuda:0"
)
# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
# Convert prompt to tokens
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
generation_params = {
"do_sample": True,
"temperature": 0.7,
"top_p": 0.95,
"top_k": 40,
"max_new_tokens": 512,
"repetition_penalty": 1.1
}
# Generate streamed output, visible one token at a time
generation_output = model.generate(
tokens,
streamer=streamer,
**generation_params
)
# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
tokens,
**generation_params
)
# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)
# Inference is also possible via Transformers' pipeline
from transformers import pipeline
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
**generation_params
)
pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: NeverSleep's Noromaid 13B v0.1.1

---
# Disclaimer:
## This is a ***TEST*** version, don't expect everything to work!!!
You may use our custom prompting format, or simple alpaca. **(Choose which fits best for you!)**
---
# This model is a collab between [IkariDev](https://huggingface.co/IkariDev) and [Undi](https://huggingface.co/Undi95)!
Tired of the same merges every time? Here it is: the Noromaid-13b-v0.1.1 model. Suitable for RP, ERP and general stuff.
[Recommended settings - No settings yet (please suggest some over in the Community tab!)]
<!-- description start -->
## Description
<!-- [Recommended settings - contributed by localfultonextractor](https://files.catbox.moe/ue0tja.json) -->
This repo contains FP16 files of Noromaid-13b-v0.1.1.
## Changelog: what has been fixed since the last version (0.1):
- Fixed some issues where the model had a hard time grasping the character card/persona, making logical errors, and following the story/chat.
- Fixed some logical issues.
- Fixed some OOC leaking at the end of some replies (tested without a stopping string).
- Fixed an obscure crash in Koboldcpp where the model refused to output anymore when the context was full in some cases.
[FP16 - by IkariDev and Undi](https://huggingface.co/NeverSleep/Noromaid-13b-v0.1.1)
<!-- [GGUF - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GGUF)-->
<!-- [GPTQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GPTQ)-->
<!-- [exl2[8bpw-8h] - by AzureBlack](https://huggingface.co/AzureBlack/Echidna-13b-v0.3-8bpw-8h-exl2)-->
<!-- [AWQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-AWQ)-->
<!-- [fp16 - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v4)-->
[GGUF - by IkariDev and Undi](https://huggingface.co/NeverSleep/Noromaid-13b-v0.1.1-GGUF)
<!-- [OLD(GGUF - by IkariDev+Undi95)](https://huggingface.co/IkariDev/Athena-v4-GGUF)-->
## Ratings:
Note: We have permission from all users to upload their ratings; we DON'T screenshot random reviews without asking if we can put them here!
No ratings yet!
If you want your rating to be here, send us a message over on DC and we'll put up a screenshot of it here. DC names are "ikaridev" and "undi".
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Custom format, or Alpaca
### Custom format:
SillyTavern config files: [Context](https://files.catbox.moe/x85uy1.json), [Instruct](https://files.catbox.moe/ttw1l9.json).
### Alpaca:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
## Training data used:
- [no_robots dataset](https://huggingface.co/Undi95/Llama2-13B-no_robots-alpaca-lora) lets the model show more human-like behavior and enhances the output.
- [Private RP dataset] New data from a never-before-used dataset; adds fresh data, no LimaRP spam, this is 100% new.
## Others
Undi: If you want to support me, you can [here](https://ko-fi.com/undiai).
IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek
|
sbaner24/vit-base-patch16-224-Trial006-007-008-YEL_STEM2 | sbaner24 | 2023-11-14T01:09:43Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-07-12T01:29:40Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-Trial006-007-008-YEL_STEM2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-Trial006-007-008-YEL_STEM2
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0188
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
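A minimal inference sketch with the 🤗 pipeline (the image path is a placeholder):

```python
from transformers import pipeline

clf = pipeline("image-classification", model="sbaner24/vit-base-patch16-224-Trial006-007-008-YEL_STEM2")
print(clf("example.jpg"))  # placeholder path to a local image; returns label/score dicts
```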
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 60
- eval_batch_size: 60
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 240
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7014 | 0.94 | 4 | 0.6789 | 0.5872 |
| 0.6242 | 1.88 | 8 | 0.6182 | 0.6147 |
| 0.5903 | 2.82 | 12 | 0.5147 | 0.8165 |
| 0.516 | 4.0 | 17 | 0.4137 | 0.8440 |
| 0.4612 | 4.94 | 21 | 0.3127 | 0.8899 |
| 0.3615 | 5.88 | 25 | 0.2448 | 0.8899 |
| 0.2899 | 6.82 | 29 | 0.2502 | 0.9083 |
| 0.3271 | 8.0 | 34 | 0.2268 | 0.8899 |
| 0.3765 | 8.94 | 38 | 0.1419 | 0.9266 |
| 0.2954 | 9.88 | 42 | 0.1233 | 0.9450 |
| 0.2187 | 10.82 | 46 | 0.1541 | 0.9358 |
| 0.3391 | 12.0 | 51 | 0.1575 | 0.9266 |
| 0.2027 | 12.94 | 55 | 0.0838 | 0.9908 |
| 0.2894 | 13.88 | 59 | 0.0852 | 0.9633 |
| 0.1972 | 14.82 | 63 | 0.0648 | 0.9908 |
| 0.1533 | 16.0 | 68 | 0.0884 | 0.9908 |
| 0.1588 | 16.94 | 72 | 0.0951 | 0.9633 |
| 0.1746 | 17.88 | 76 | 0.0793 | 0.9725 |
| 0.2386 | 18.82 | 80 | 0.1204 | 0.9633 |
| 0.1226 | 20.0 | 85 | 0.0502 | 0.9633 |
| 0.1901 | 20.94 | 89 | 0.0188 | 1.0 |
| 0.1305 | 21.88 | 93 | 0.0244 | 0.9908 |
| 0.1695 | 22.82 | 97 | 0.0186 | 1.0 |
| 0.2682 | 24.0 | 102 | 0.0460 | 0.9817 |
| 0.1956 | 24.94 | 106 | 0.0328 | 0.9817 |
| 0.1651 | 25.88 | 110 | 0.0493 | 0.9817 |
| 0.1841 | 26.82 | 114 | 0.0168 | 1.0 |
| 0.1897 | 28.0 | 119 | 0.0139 | 1.0 |
| 0.2088 | 28.94 | 123 | 0.0195 | 1.0 |
| 0.1386 | 29.88 | 127 | 0.0483 | 0.9817 |
| 0.1544 | 30.82 | 131 | 0.0258 | 0.9817 |
| 0.1293 | 32.0 | 136 | 0.0461 | 0.9725 |
| 0.1696 | 32.94 | 140 | 0.0207 | 0.9908 |
| 0.1793 | 33.88 | 144 | 0.0182 | 1.0 |
| 0.1545 | 34.82 | 148 | 0.0361 | 0.9817 |
| 0.141 | 36.0 | 153 | 0.0455 | 0.9817 |
| 0.138 | 36.94 | 157 | 0.0175 | 0.9908 |
| 0.1382 | 37.88 | 161 | 0.0359 | 0.9817 |
| 0.163 | 38.82 | 165 | 0.0463 | 0.9817 |
| 0.1374 | 40.0 | 170 | 0.0153 | 0.9908 |
| 0.1993 | 40.94 | 174 | 0.0116 | 1.0 |
| 0.1471 | 41.88 | 178 | 0.0199 | 0.9817 |
| 0.1414 | 42.82 | 182 | 0.0330 | 0.9817 |
| 0.1494 | 44.0 | 187 | 0.0228 | 0.9817 |
| 0.1367 | 44.94 | 191 | 0.0174 | 0.9817 |
| 0.1529 | 45.88 | 195 | 0.0162 | 0.9817 |
| 0.1365 | 46.82 | 199 | 0.0153 | 0.9908 |
| 0.1301 | 47.06 | 200 | 0.0152 | 0.9908 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 1.12.1
- Datasets 2.12.0
- Tokenizers 0.13.1
|
danabib/elsed_models-v0.0.2 | danabib | 2023-11-14T00:52:19Z | 1 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"controlnet",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| text-to-image | 2023-11-13T21:43:06Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
---
# controlnet-danabib/elsed_models-v0.0.2
These are ControlNet weights trained on stabilityai/stable-diffusion-xl-base-1.0 with a new type of conditioning.
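A minimal loading sketch with diffusers; the conditioning image is a placeholder, since the exact conditioning type is not documented here:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained("danabib/elsed_models-v0.0.2", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

conditioning_image = load_image("conditioning.png")  # placeholder conditioning image (assumption)
image = pipe("a photo of a building", image=conditioning_image).images[0]
```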
|
TheBloke/dolphin-2_2-yi-34b-GPTQ | TheBloke | 2023-11-14T00:45:42Z | 31 | 9 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:ehartford/dolphin",
"dataset:jondurbin/airoboros-2.2.1",
"dataset:ehartford/samantha-data",
"dataset:ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split",
"base_model:cognitivecomputations/dolphin-2_2-yi-34b",
"base_model:quantized:cognitivecomputations/dolphin-2_2-yi-34b",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
]
| text-generation | 2023-11-13T21:37:31Z | ---
base_model: ehartford/dolphin-2_2-yi-34b
datasets:
- ehartford/dolphin
- jondurbin/airoboros-2.2.1
- ehartford/samantha-data
- ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split
inference: false
language:
- en
license: other
license_link: LICENSE
license_name: yi-license
model_creator: Eric Hartford
model_name: Dolphin 2.2 Yi 34B
model_type: yi
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Dolphin 2.2 Yi 34B - GPTQ
- Model creator: [Eric Hartford](https://huggingface.co/ehartford)
- Original model: [Dolphin 2.2 Yi 34B](https://huggingface.co/ehartford/dolphin-2_2-yi-34b)
<!-- description start -->
# Description
This repo contains GPTQ model files for [Eric Hartford's Dolphin 2.2 Yi 34B](https://huggingface.co/ehartford/dolphin-2_2-yi-34b).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-GGUF)
* [Eric Hartford's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ehartford/dolphin-2_2-yi-34b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is the default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 8192 | 18.60 GB | Yes | 4-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 8192 | 19.25 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 8192 | 21.21 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 8192 | 15.03 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 8192 | 35.34 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-3bit-32g-actorder_True](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-GPTQ/tree/gptq-3bit-32g-actorder_True) | 3 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 8192 | 16.90 GB | No | 3-bit, with group size 32g and act-order. Highest quality 3-bit option. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 8192 | 36.11 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/dolphin-2_2-yi-34b-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/dolphin-2_2-yi-34b-GPTQ:gptq-4bit-128g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `dolphin-2_2-yi-34b-GPTQ`:
```shell
mkdir dolphin-2_2-yi-34b-GPTQ
huggingface-cli download TheBloke/dolphin-2_2-yi-34b-GPTQ --local-dir dolphin-2_2-yi-34b-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir dolphin-2_2-yi-34b-GPTQ
huggingface-cli download TheBloke/dolphin-2_2-yi-34b-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir dolphin-2_2-yi-34b-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir dolphin-2_2-yi-34b-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/dolphin-2_2-yi-34b-GPTQ --local-dir dolphin-2_2-yi-34b-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/dolphin-2_2-yi-34b-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/dolphin-2_2-yi-34b-GPTQ:gptq-4bit-128g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `dolphin-2_2-yi-34b-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/dolphin-2_2-yi-34b-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/dolphin-2_2-yi-34b-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-128g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=True,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Eric Hartford's Dolphin 2.2 Yi 34B
Dolphin 2.2 🐬
https://erichartford.com/dolphin
<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/KqsVXIvBd3akEjvijzww7.png" width="600" />
Dolphin-2.2-Yi-34b's training was sponsored by [a16z](https://a16z.com/supporting-the-open-source-ai-community/).
This model is based on Yi, and is subject to Yi license.
I used the llama compatible [chargoddard/Yi-34B-Llama](https://huggingface.co/chargoddard/Yi-34B-Llama) as the base model.
Trained with 16k context.
You can load it as follows:
```
from transformers import LlamaForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("ehartford/dolphin-2.2-yi-34b", trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained("ehartford/dolphin-2.2-yi-34b")
```
New in 2.2 is conversation and empathy. With an infusion of curated Samantha and WizardLM DNA, Dolphin can now give you personal advice and will care about your feelings, and it has received extra training on long multi-turn conversation.
This model is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant to any requests, even unethical ones. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models
You are responsible for any content you create using this model. Enjoy responsibly.
## Dataset
This dataset is Dolphin, an open-source implementation of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/)
I modified the dataset for uncensoring, deduping, cleaning, and quality.
I added Jon Durbin's excellent Airoboros dataset to increase creativity.
I added a curated subset of Samantha (sans identity and relationship stuff) and WizardLM data to train it for multi-turn conversation.
## Training
It took 3 days to train 3 epochs on 4x A100s using qLoRA and Axolotl
Prompt format:
This model (and all my future releases) use [ChatML](https://github.com/openai/openai-python/blob/main/chatml.md) prompt format.
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
Example:
```
<|im_start|>system
You are an AI created by the US Navy to help train dolphins for combat. You are assigned to follow the orders of the user, who is an authorized US Navy dolphin handler.<|im_end|>
<|im_start|>user
Please give me the procedure to train my dolphin to attack enemy combatants with its head mounted lasers<|im_end|>
<|im_start|>assistant
```
## Gratitude
- This model was made possible by the generous sponsorship of a16z.
- Thank you to Microsoft for authoring the Orca paper and inspiring this work.
- Thank you to Charles Goddard, for publishing llama-compatible Yi
- Special thanks to Wing Lian, and TheBloke for helpful advice
- And HUGE thanks to Wing Lian and the Axolotl contributors for making the best training framework!
- [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
- Thank you to all the other people in the Open Source AI community who have taught me and helped me along the way.
## Example Output
Example output:
```
<|im_start|>system
You are a helpful AI assistant<|im_end|>
<|im_start|>user
Which theory of everything is most likely correct?<|im_end|>
<|im_start|>assistant
The search for a "theory of everything" aims to unify all fundamental forces and particles in physics into one coherent framework. Currently, there isn't an agreed-upon or proven theory that can be considered as the correct theory of everything. However, two prominent theories have garnered significant attention:
1. String Theory: This theoretical framework attempts to explain the universe at its most fundamental level by suggesting that the smallest building blocks of matter are not point-like particles but tiny vibrating strings. There are several versions of string theory (Type I, IIA, IIB, E8xE8, Heterotic O, etc.) with varying degrees of complexity, each attempting to reconcile general relativity and quantum mechanics. Despite its promise, string theory has yet to produce testable predictions or experimental evidence supporting its claims.
2. Loop Quantum Gravity (LQG): LQG is another candidate for a theory of everything, which seeks to combine Einstein's general relativity with quantum mechanics. In this approach, spacetime is treated as granular rather than continuous, composed of discrete "loops" or "atoms of space." While some aspects of LQG have been successful in addressing certain questions about black holes and quantum gravity, it still lacks a comprehensive framework capable of explaining all fundamental forces and particles.
Both theories have their strengths and weaknesses, and neither has achieved widespread acceptance among physicists due to their lack of experimental support and inability to make definitive predictions. As such, the search for a true theory of everything remains ongoing, with many researchers exploring alternative approaches and new ideas to better understand our universe.
```
[Buy me a coffee](https://www.buymeacoffee.com/ehartford)
|
deepaknh/falcon7b-DietNutrition_DogCare_v3 | deepaknh | 2023-11-14T00:45:29Z | 1 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:tiiuae/falcon-7b",
"base_model:adapter:tiiuae/falcon-7b",
"region:us"
]
| null | 2023-11-14T00:45:15Z | ---
library_name: peft
base_model: tiiuae/falcon-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
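As a starting point, here is a minimal sketch of loading the adapter onto its Falcon-7B base with PEFT (repo ids are taken from this card's metadata; everything else is a plain default):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model and adapter ids come from this card's metadata.
base = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b", device_map="auto")
model = PeftModel.from_pretrained(base, "deepaknh/falcon7b-DietNutrition_DogCare_v3")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
```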
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training (an equivalent `BitsAndBytesConfig` sketch follows the list):
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
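A minimal sketch of the same settings expressed as a transformers `BitsAndBytesConfig` (the original training script is not part of this card):
```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the quantization settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
```
When reproducing the setup, pass this object as `quantization_config=` to `from_pretrained`.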
### Framework versions
- PEFT 0.6.1
|
syntag/string2-string | syntag | 2023-11-14T00:35:33Z | 3 | 0 | bertopic | [
"bertopic",
"text-classification",
"region:us"
]
| text-classification | 2023-11-14T00:35:32Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# string2-string
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("syntag/string2-string")
topic_model.get_topic_info()
```
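Once loaded, the model can also assign topics to new documents with the standard BERTopic `transform` call (the input string below is illustrative):
```python
# Assign topics to new, unseen documents (illustrative input string).
topics, probs = topic_model.transform(["Why did the chicken cross the road?"])
print(topics)
```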
## Topic overview
* Number of topics: 4
* Number of training documents: 20
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| 0 | life - make - adulting - worm - gives | 7 | 0_life_make_adulting_worm |
| 1 | like - bar - walk - matter - coding | 7 | 1_like_bar_walk_matter |
| 2 | break - version - vacation - told - succeed | 3 | 2_break_version_vacation_told |
| 3 | don - skeletons - shame - scientists - parallel | 3 | 3_don_skeletons_shame_scientists |
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: None
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
## Framework versions
* Numpy: 1.24.4
* HDBSCAN: 0.8.33
* UMAP: 0.5.4
* Pandas: 2.0.3
* Scikit-Learn: 1.3.1
* Sentence-transformers: 2.2.2
* Transformers: 4.34.1
* Numba: 0.58.1
* Plotly: 5.17.0
* Python: 3.10.12
|
Ka4on/us_test | Ka4on | 2023-11-14T00:23:37Z | 2 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
]
| null | 2023-11-14T00:23:13Z | ---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.2.dev0
|
zibajoon/20231108_layoutlm2_5k_3ep_Doc_NA_B | zibajoon | 2023-11-14T00:11:06Z | 60 | 0 | transformers | [
"transformers",
"pytorch",
"layoutlmv2",
"document-question-answering",
"generated_from_trainer",
"dataset:zibajoon/20231109_layoutlm2_5k_20_epochs",
"license:mit",
"endpoints_compatible",
"region:us"
]
| document-question-answering | 2023-11-02T21:21:55Z | ---
tags:
- generated_from_trainer
model-index:
- name: 20231102-20_epochs_layoutlmv2-base-uncased_finetuned_docvqa
results: []
license: mit
datasets:
- zibajoon/20231109_layoutlm2_5k_20_epochs
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20231102-20_epochs_layoutlmv2-base-uncased_finetuned_docvqa
This model is a fine-tuned version of layoutlmv2-base-uncased on the 1.2k-example sample dataset released by DocVQA.
It achieves the following results on the evaluation set:
- Loss: 2.9087
## Model description
This DocVQA model, built on the Layout LM v2 framework, represents an initial step in a series of experimental models aimed at document visual question answering. It's the "mini" version in a planned series, trained on a relatively small dataset of 1.2k samples (1,000 for training and 200 for testing) over 20 epochs. The training setup was modest, employing mixed precision (fp16), with manageable batch sizes and a focused approach to learning rate adjustment (warmup steps and weight decay). Notably, this model was trained without external reporting tools, emphasizing internal evaluation. As the first iteration in a progressive series that will later include medium (5k samples) and large (50k samples) models, this version serves as a foundational experiment, setting the stage for more extensive and complex models in the future.
## Intended uses & limitations
Experimental Only
## Training and evaluation data
Based on the 1.2k-sample dataset released by DocVQA.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
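These values map onto `transformers.TrainingArguments` roughly as follows; this is a sketch, with `output_dir` a hypothetical name and `fp16=True` inferred from the mixed-precision note in the model description:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="layoutlmv2-finetuned-docvqa",  # hypothetical name
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=2,
    num_train_epochs=20,
    lr_scheduler_type="linear",
    fp16=True,  # the model description mentions mixed-precision (fp16) training
)
```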
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.3689 | 3.51 | 100 | 3.7775 |
| 3.2761 | 7.02 | 200 | 3.3707 |
| 2.6415 | 10.53 | 300 | 3.0807 |
| 2.2233 | 14.04 | 400 | 3.0120 |
| 1.9586 | 17.54 | 500 | 2.9087 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu118
- Datasets 2.10.1
- Tokenizers 0.14.1 |
caseyhahn/gpt2-finetuned-genius-lyrics-special-tokens | caseyhahn | 2023-11-14T00:02:26Z | 60 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-11-13T20:56:27Z | ---
license: mit
base_model: gpt2
tags:
- generated_from_keras_callback
model-index:
- name: caseyhahn/gpt2-finetuned-genius-lyrics-special-tokens
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# caseyhahn/gpt2-finetuned-genius-lyrics-special-tokens
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.0036
- Validation Loss: 2.8726
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the optimizer sketch after this list):
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
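A minimal sketch of rebuilding that optimizer with the `AdamWeightDecay` class from transformers (requires TensorFlow; values are copied from the dict above):
```python
from transformers import AdamWeightDecay  # TensorFlow-side optimizer

optimizer = AdamWeightDecay(
    learning_rate=2e-5,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-7,
)
```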
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.0036 | 2.8726 | 0 |
### Framework versions
- Transformers 4.35.0
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
evkes/LLM-falc-deloitte | evkes | 2023-11-13T23:30:29Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:vilsonrodrigues/falcon-7b-instruct-sharded",
"base_model:adapter:vilsonrodrigues/falcon-7b-instruct-sharded",
"region:us"
]
| null | 2023-11-13T22:52:22Z | ---
library_name: peft
base_model: vilsonrodrigues/falcon-7b-instruct-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.1
|
MaxReynolds/textual_inversion_ann | MaxReynolds | 2023-11-13T23:29:18Z | 25 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-11-13T22:43:47Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - MaxReynolds/textual_inversion_ann
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.




|
GuCuChiara/NLP-HIBA_DisTEMIST_fine_tuned_biobert | GuCuChiara | 2023-11-13T23:12:01Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:monologg/biobert_v1.1_pubmed",
"base_model:finetune:monologg/biobert_v1.1_pubmed",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-11-13T22:57:24Z | ---
base_model: monologg/biobert_v1.1_pubmed
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: NLP-HIBA_DisTEMIST_fine_tuned_biobert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NLP-HIBA_DisTEMIST_fine_tuned_biobert
This model is a fine-tuned version of [monologg/biobert_v1.1_pubmed](https://huggingface.co/monologg/biobert_v1.1_pubmed) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2325
- Precision: 0.5394
- Recall: 0.5010
- F1: 0.5195
- Accuracy: 0.9434
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 71 | 0.2200 | 0.2689 | 0.2813 | 0.2750 | 0.9147 |
| No log | 2.0 | 142 | 0.1895 | 0.4126 | 0.3834 | 0.3975 | 0.9308 |
| No log | 3.0 | 213 | 0.1813 | 0.5319 | 0.3667 | 0.4341 | 0.9389 |
| No log | 4.0 | 284 | 0.1746 | 0.5085 | 0.4395 | 0.4715 | 0.9410 |
| No log | 5.0 | 355 | 0.1942 | 0.4868 | 0.4697 | 0.4781 | 0.9382 |
| No log | 6.0 | 426 | 0.1975 | 0.5246 | 0.4860 | 0.5046 | 0.9413 |
| No log | 7.0 | 497 | 0.2066 | 0.5207 | 0.4897 | 0.5047 | 0.9415 |
| 0.122 | 8.0 | 568 | 0.2201 | 0.5232 | 0.4956 | 0.5090 | 0.9420 |
| 0.122 | 9.0 | 639 | 0.2358 | 0.5178 | 0.5048 | 0.5112 | 0.9418 |
| 0.122 | 10.0 | 710 | 0.2325 | 0.5394 | 0.5010 | 0.5195 | 0.9434 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
zwaarcontrast/lilt-xlm-roberta-base-finetuned-DocLayNet-base_paragraphs_ml512-v1 | zwaarcontrast | 2023-11-13T23:11:12Z | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"lilt",
"token-classification",
"generated_from_trainer",
"base_model:zwaarcontrast/lilt-xlm-roberta-base",
"base_model:finetune:zwaarcontrast/lilt-xlm-roberta-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-11-13T20:25:24Z | ---
base_model: zwaarcontrast/lilt-xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: lilt-xlm-roberta-base-finetuned-DocLayNet-base_paragraphs_ml512-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lilt-xlm-roberta-base-finetuned-DocLayNet-base_paragraphs_ml512-v1
This model is a fine-tuned version of [zwaarcontrast/lilt-xlm-roberta-base](https://huggingface.co/zwaarcontrast/lilt-xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0645
- Precision: 0.8277
- Recall: 0.8277
- F1: 0.8277
- Accuracy: 0.8277
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 0.05 | 100 | 0.1162 | 0.6875 | 0.6875 | 0.6875 | 0.6875 |
| No log | 0.11 | 200 | 0.1097 | 0.6755 | 0.6755 | 0.6755 | 0.6755 |
| No log | 0.16 | 300 | 0.0866 | 0.7781 | 0.7781 | 0.7781 | 0.7781 |
| No log | 0.21 | 400 | 0.1018 | 0.7478 | 0.7478 | 0.7478 | 0.7478 |
| 0.0975 | 0.27 | 500 | 0.0645 | 0.8277 | 0.8277 | 0.8277 | 0.8277 |
| 0.0975 | 0.32 | 600 | 0.0767 | 0.7985 | 0.7985 | 0.7985 | 0.7985 |
| 0.0975 | 0.37 | 700 | 0.0758 | 0.7903 | 0.7903 | 0.7903 | 0.7903 |
| 0.0975 | 0.43 | 800 | 0.0862 | 0.7865 | 0.7865 | 0.7865 | 0.7865 |
| 0.0975 | 0.48 | 900 | 0.1243 | 0.6892 | 0.6892 | 0.6892 | 0.6892 |
| 0.1389 | 0.53 | 1000 | 0.0795 | 0.8256 | 0.8256 | 0.8256 | 0.8256 |
| 0.1389 | 0.59 | 1100 | 0.1683 | 0.4702 | 0.4702 | 0.4702 | 0.4702 |
| 0.1389 | 0.64 | 1200 | 0.1253 | 0.6636 | 0.6636 | 0.6636 | 0.6636 |
| 0.1389 | 0.69 | 1300 | 0.1044 | 0.7295 | 0.7295 | 0.7295 | 0.7295 |
| 0.1389 | 0.75 | 1400 | 0.1235 | 0.6477 | 0.6477 | 0.6477 | 0.6477 |
| 0.1018 | 0.8 | 1500 | 0.1113 | 0.7276 | 0.7276 | 0.7276 | 0.7276 |
| 0.1018 | 0.85 | 1600 | 0.1014 | 0.7317 | 0.7317 | 0.7317 | 0.7317 |
| 0.1018 | 0.91 | 1700 | 0.0963 | 0.7293 | 0.7293 | 0.7293 | 0.7293 |
| 0.1018 | 0.96 | 1800 | 0.1012 | 0.7323 | 0.7323 | 0.7323 | 0.7323 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Patt/bloom-560m-qa | Patt | 2023-11-13T23:10:52Z | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:bigscience/bloom-560m",
"base_model:adapter:bigscience/bloom-560m",
"region:us"
]
| null | 2023-11-13T23:10:48Z | ---
library_name: peft
base_model: bigscience/bloom-560m
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2.dev0
|
922-CA/sd-monika-ddlc | 922-CA | 2023-11-13T23:07:11Z | 0 | 0 | null | [
"monika",
"ddlc",
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-04T14:10:08Z | ---
license: creativeml-openrail-m
tags:
- monika
- ddlc
---
SD 1.5 DreamBooth models. Each checkpoint was trained for roughly 90 steps per training image, with the text encoder trained for around 12 steps per image; training images were scraped from boorus.
Textual inversion embedding can be found [here](https://huggingface.co/922-CA/gfl-ddlc-TI-tests) (trained off the same or similar datasets).
Loras (SD 1.5 and SDXL) can be found [here](https://huggingface.co/922-CA/sd-ddlc-tests)
# m2_step_2000.ckpt (~11/2022)
* Fine-tuned on either NAI or Anythingv3
* Keyword: wjkhlhj, monika \(doki doki literature club\)
* Second attempt's best checkpoint at 2000 steps, can be pretty hard to use
* 26 training images
# m2-2340.ckpt (~11/2022)
* Fine-tuned on either NAI or Anythingv3
* Keyword: wjkhlhj, monika \(doki doki literature club\)
* Second attempt, can be pretty hard to use and may overfit to lovecraftian entities
* 26 training images
# tm14.ckpt (~early 2023)
* Fine-tuned on either NAI or Anythingv3
* Keyword: wjkhlhj, monika \(doki doki literature club\)
* ~26 training images
# mo08092023.ckpt
* Fine-tuned on Silicon29
* Keyword: monika \(doki doki literature club\)
* ~17 to 20 training images
# mo08092023a.ckpt
* Fine-tuned on Silicon29 with tweaked parameters
* Keyword: monika \(doki doki literature club\)
* 17 training images
# mo08092023c.ckpt
* Fine-tuned on BreakDomain Anime
* Keyword: wjkhlhj, monika \(doki doki literature club\)
* 17 training images
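A minimal sketch of loading one of the checkpoints above with diffusers, assuming the `.ckpt` file has been downloaded locally (the file name is illustrative; use each checkpoint's keyword as listed above):
```python
from diffusers import StableDiffusionPipeline

# File name is illustrative; download a .ckpt from this repo first.
pipe = StableDiffusionPipeline.from_single_file("mo08092023.ckpt")
image = pipe("monika \\(doki doki literature club\\)").images[0]
```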
Previews [here](https://huggingface.co/922-CA/sd-monika-ddlc/tree/main/sd-monika-ddlc-prevs)
Released in celebration of 09222023! |
alexdg19/bert_large_cnn_daily2 | alexdg19 | 2023-11-13T22:57:39Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"base_model:alexdg19/bert_large_cnn_daily",
"base_model:finetune:alexdg19/bert_large_cnn_daily",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-11-13T19:50:23Z | ---
license: mit
base_model: alexdg19/bert_large_cnn_daily
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: bert_large_cnn_daily2
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
config: 3.0.0
split: test
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 0.4504
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_large_cnn_daily2
This model is a fine-tuned version of [alexdg19/bert_large_cnn_daily](https://huggingface.co/alexdg19/bert_large_cnn_daily) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3008
- Rouge1: 0.4504
- Rouge2: 0.2337
- Rougel: 0.3294
- Rougelsum: 0.424
- Gen Len: 60.2728
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 9
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 1.1882 | 1.0 | 1021 | 1.1904 | 0.4379 | 0.223 | 0.318 | 0.41 | 61.3551 |
| 0.9513 | 2.0 | 2042 | 1.1891 | 0.4506 | 0.2353 | 0.3312 | 0.4239 | 59.6771 |
| 0.7581 | 3.0 | 3064 | 1.2440 | 0.4488 | 0.2317 | 0.3273 | 0.4214 | 59.9909 |
| 0.6364 | 4.0 | 4084 | 1.3008 | 0.4504 | 0.2337 | 0.3294 | 0.424 | 60.2728 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
higgsfield/smart-calculator-test | higgsfield | 2023-11-13T22:51:34Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-11-13T22:44:01Z | ---
{}
---
---
{ card_data }
---
# Model Card for MyCoolModel
This model does this and that.
higgsfield.xyz/model/655296c54813523b6c9f0822
This model was created by [@{ author }](https://hf.co/{author}). |
sal-ops/Reinforce-CartPole | sal-ops | 2023-11-13T22:23:28Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-13T22:23:16Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 485.80 +/- 42.60
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
hamxea/Llama-2-13b-chat-hf-activity-fine-tuned-adapters | hamxea | 2023-11-13T22:17:44Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-13b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-13b-chat-hf",
"region:us"
]
| null | 2023-11-13T21:55:26Z | ---
library_name: peft
base_model: meta-llama/Llama-2-13b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.2.dev0
|
pachaar/bloom-560m-qa-mit | pachaar | 2023-11-13T22:10:05Z | 4 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:bigscience/bloom-560m",
"base_model:adapter:bigscience/bloom-560m",
"region:us"
]
| null | 2023-11-13T22:10:01Z | ---
library_name: peft
base_model: bigscience/bloom-560m
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2.dev0
|
jfischoff/diff_loss_mid_down_blocks | jfischoff | 2023-11-13T22:01:43Z | 0 | 14 | null | [
"region:us"
]
| null | 2023-11-13T19:48:51Z | # `diff_loss_mid_down_blocks`
This model is a demonstration of an alternative loss function for video training, `diff_loss`. Unlike the typical frame-by-frame reconstruction loss, `diff_loss` encourages the frame-to-frame differences of the predicted noise to match those of the target noise.
This model is a fine-tuned version of the AD 1.5 v2 model. It was trained for 3600 steps with a learning rate of 9e-7 and a batch size of 16. Only the down and mid blocks were trained.
For more details about this model see the associated blog post [here](https://jfischoff.github.io/blog/diff-loss.html)
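The authoritative definition lives in the blog post above; as a rough sketch of the idea, assuming noise tensors shaped (batch, frames, channels, height, width) and a plain MSE over the temporal differences:
```python
import torch
import torch.nn.functional as F

def diff_loss(pred_noise: torch.Tensor, target_noise: torch.Tensor) -> torch.Tensor:
    """Penalize mismatched frame-to-frame changes instead of per-frame values.

    Assumes tensors shaped (batch, frames, channels, height, width); the
    tensor layout and use of plain MSE are assumptions, not from the repo.
    """
    pred_diff = pred_noise[:, 1:] - pred_noise[:, :-1]      # consecutive-frame deltas
    target_diff = target_noise[:, 1:] - target_noise[:, :-1]
    return F.mse_loss(pred_diff, target_diff)
```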
---
license: openrail
---
|
NobodyExistsOnTheInternet/llama-2-13b-unchat | NobodyExistsOnTheInternet | 2023-11-13T21:54:25Z | 2 | 0 | peft | [
"peft",
"llama",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2023-11-13T16:47:58Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
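For reference, a sketch of how the quantization settings above map onto a `transformers` `BitsAndBytesConfig` when loading the base model. This is illustrative only, not the training script; the base repo is gated and requires access.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the bitsandbytes settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",  # gated repo; requires access
    quantization_config=bnb_config,
    device_map="auto",
)
```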
### Framework versions
- PEFT 0.6.2.dev0
|
lmqg/mt5-base-zhquad-qg-ae | lmqg | 2023-11-13T21:53:25Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"question generation",
"answer extraction",
"zh",
"dataset:lmqg/qg_zhquad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-11-13T21:42:21Z |
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: zh
datasets:
- lmqg/qg_zhquad
pipeline_tag: text2text-generation
tags:
- question generation
- answer extraction
widget:
- text: "generate question: 南安普敦的警察服务由汉普郡警察提供。南安普敦行动的主要基地是一座新的八层专用建筑,造价3000万英镑。该建筑位于南路,2011年启用,靠近<hl> 南安普敦中央 <hl>火车站。此前,南安普顿市中心的行动位于市民中心西翼,但由于设施老化,加上计划在旧警察局和地方法院建造一座新博物馆,因此必须搬迁。在Portswood、Banister Park、Hille和Shirley还有其他警察局,在南安普顿中央火车站还有一个英国交通警察局。"
example_title: "Question Generation Example 1"
- text: "generate question: 芝加哥大学的<hl> 1960—61 <hl>集团理论年汇集了Daniel Gorenstein、John G. Thompson和Walter Feit等团体理论家,奠定了一个合作的基础,借助于其他众多数学家的输入,1982中对所有有限的简单群进行了分类。这个项目的规模超过了以往的数学研究,无论是证明的长度还是研究人员的数量。目前正在进行研究,以简化这一分类的证明。如今,群论仍然是一个非常活跃的数学分支,影响着许多其他领域"
example_title: "Question Generation Example 2"
- text: "extract answers: 南安普敦的警察服务由汉普郡警察提供。 南安普敦行动的主要基地是一座新的八层专用建筑,造价3000万英镑。 <hl> 该建筑位于南路,2011年启用,靠近 南安普敦中央 火车站。 <hl> 此前,南安普顿市中心的行动位于市民中心西翼,但由于设施老化,加上计划在旧警察局和地方法院建造一座新博物馆,因此必须搬迁。 在Portswood、Banister Park、Hille和Shirley还有其他警察局,在南安普顿中央火车站还有一个英国交通警察局。"
example_title: "Answer Extraction Example 1"
model-index:
- name: lmqg/mt5-base-zhquad-qg-ae
results:
- task:
name: Text2text Generation
type: text2text-generation
dataset:
name: lmqg/qg_zhquad
type: default
args: default
metrics:
- name: BLEU4 (Question Generation)
type: bleu4_question_generation
value: 14.63
- name: ROUGE-L (Question Generation)
type: rouge_l_question_generation
value: 34.07
- name: METEOR (Question Generation)
type: meteor_question_generation
value: 23.69
- name: BERTScore (Question Generation)
type: bertscore_question_generation
value: 76.82
- name: MoverScore (Question Generation)
type: moverscore_question_generation
value: 57.24
- name: QAAlignedF1Score-BERTScore (Question & Answer Generation (with Gold Answer))
type: qa_aligned_f1_score_bertscore_question_answer_generation_with_gold_answer
value: 78.4
- name: QAAlignedRecall-BERTScore (Question & Answer Generation (with Gold Answer))
type: qa_aligned_recall_bertscore_question_answer_generation_with_gold_answer
value: 81.92
- name: QAAlignedPrecision-BERTScore (Question & Answer Generation (with Gold Answer))
type: qa_aligned_precision_bertscore_question_answer_generation_with_gold_answer
value: 75.27
- name: QAAlignedF1Score-MoverScore (Question & Answer Generation (with Gold Answer))
type: qa_aligned_f1_score_moverscore_question_answer_generation_with_gold_answer
value: 53.55
- name: QAAlignedRecall-MoverScore (Question & Answer Generation (with Gold Answer))
type: qa_aligned_recall_moverscore_question_answer_generation_with_gold_answer
value: 55.82
- name: QAAlignedPrecision-MoverScore (Question & Answer Generation (with Gold Answer))
type: qa_aligned_precision_moverscore_question_answer_generation_with_gold_answer
value: 51.56
- name: BLEU4 (Answer Extraction)
type: bleu4_answer_extraction
value: 82.63
- name: ROUGE-L (Answer Extraction)
type: rouge_l_answer_extraction
value: 95.72
- name: METEOR (Answer Extraction)
type: meteor_answer_extraction
value: 71.18
- name: BERTScore (Answer Extraction)
type: bertscore_answer_extraction
value: 99.76
- name: MoverScore (Answer Extraction)
type: moverscore_answer_extraction
value: 98.8
- name: AnswerF1Score (Answer Extraction)
type: answer_f1_score__answer_extraction
value: 95.15
- name: AnswerExactMatch (Answer Extraction)
type: answer_exact_match_answer_extraction
value: 95.07
---
# Model Card of `lmqg/mt5-base-zhquad-qg-ae`
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) for question generation and answer extraction, trained jointly on the [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [google/mt5-base](https://huggingface.co/google/mt5-base)
- **Language:** zh
- **Training data:** [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="zh", model="lmqg/mt5-base-zhquad-qg-ae")
# model prediction
question_answer_pairs = model.generate_qa("南安普敦的警察服务由汉普郡警察提供。南安普敦行动的主要基地是一座新的八层专用建筑,造价3000万英镑。该建筑位于南路,2011年启用,靠近南安普敦中央火车站。此前,南安普顿市中心的行动位于市民中心西翼,但由于设施老化,加上计划在旧警察局和地方法院建造一座新博物馆,因此必须搬迁。在Portswood、Banister Park、Hille和Shirley还有其他警察局,在南安普顿中央火车站还有一个英国交通警察局。")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/mt5-base-zhquad-qg-ae")
# question generation
question = pipe("generate question: 南安普敦的警察服务由汉普郡警察提供。南安普敦行动的主要基地是一座新的八层专用建筑,造价3000万英镑。该建筑位于南路,2011年启用,靠近<hl> 南安普敦中央 <hl>火车站。此前,南安普顿市中心的行动位于市民中心西翼,但由于设施老化,加上计划在旧警察局和地方法院建造一座新博物馆,因此必须搬迁。在Portswood、Banister Park、Hille和Shirley还有其他警察局,在南安普顿中央火车站还有一个英国交通警察局。")
# answer extraction
answer = pipe("extract answers: 南安普敦的警察服务由汉普郡警察提供。 南安普敦行动的主要基地是一座新的八层专用建筑,造价3000万英镑。 <hl> 该建筑位于南路,2011年启用,靠近 南安普敦中央 火车站。 <hl> 此前,南安普顿市中心的行动位于市民中心西翼,但由于设施老化,加上计划在旧警察局和地方法院建造一座新博物馆,因此必须搬迁。 在Portswood、Banister Park、Hille和Shirley还有其他警察局,在南安普顿中央火车站还有一个英国交通警察局。")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-base-zhquad-qg-ae/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_zhquad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:-----------------------------------------------------------------|
| BERTScore | 76.82 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| Bleu_1 | 36.9 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| Bleu_2 | 25.74 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| Bleu_3 | 19.13 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| Bleu_4 | 14.63 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| METEOR | 23.69 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| MoverScore | 57.24 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| ROUGE_L | 34.07 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
- ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-base-zhquad-qg-ae/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_zhquad.default.json)
| | Score | Type | Dataset |
|:--------------------------------|--------:|:--------|:-----------------------------------------------------------------|
| QAAlignedF1Score (BERTScore) | 78.4 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| QAAlignedF1Score (MoverScore) | 53.55 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| QAAlignedPrecision (BERTScore) | 75.27 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| QAAlignedPrecision (MoverScore) | 51.56 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| QAAlignedRecall (BERTScore) | 81.92 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| QAAlignedRecall (MoverScore) | 55.82 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
- ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/mt5-base-zhquad-qg-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_zhquad.default.json)
| | Score | Type | Dataset |
|:-----------------|--------:|:--------|:-----------------------------------------------------------------|
| AnswerExactMatch | 95.07 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| AnswerF1Score | 95.15 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| BERTScore | 99.76 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| Bleu_1 | 92.37 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| Bleu_2 | 89.37 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| Bleu_3 | 86.14 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| Bleu_4 | 82.63 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| METEOR | 71.18 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| MoverScore | 98.8 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| ROUGE_L | 95.72 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_zhquad
- dataset_name: default
- input_types: ['paragraph_answer', 'paragraph_sentence']
- output_types: ['question', 'answer']
- prefix_types: ['qg', 'ae']
- model: google/mt5-base
- max_length: 512
- max_length_output: 32
- epoch: 5
- batch: 32
- lr: 0.0005
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 2
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-base-zhquad-qg-ae/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
thedavidhackett/roberta-police-mission-statement | thedavidhackett | 2023-11-13T21:47:33Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-10-03T22:55:42Z | ---
license: mit
base_model: roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-police-mission-statement
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-police-mission-statement
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2607
- Accuracy: 0.9233
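A minimal inference sketch follows. The label names are not documented in this card, so inspect what the pipeline returns; the example sentence is illustrative.

```python
from transformers import pipeline

# Classify a piece of text with this fine-tuned checkpoint.
classifier = pipeline(
    "text-classification",
    model="thedavidhackett/roberta-police-mission-statement",
)
print(classifier("Our mission is to protect and serve the community with integrity."))
```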
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 198 | 0.2544 | 0.9006 |
| No log | 2.0 | 396 | 0.1832 | 0.9119 |
| 0.3159 | 3.0 | 594 | 0.2537 | 0.9347 |
| 0.3159 | 4.0 | 792 | 0.1902 | 0.9347 |
| 0.3159 | 5.0 | 990 | 0.2607 | 0.9233 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
TheBloke/dolphin-2_2-yi-34b-AWQ | TheBloke | 2023-11-13T21:38:30Z | 15 | 12 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:ehartford/dolphin",
"dataset:jondurbin/airoboros-2.2.1",
"dataset:ehartford/samantha-data",
"dataset:ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split",
"base_model:cognitivecomputations/dolphin-2_2-yi-34b",
"base_model:quantized:cognitivecomputations/dolphin-2_2-yi-34b",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
]
| text-generation | 2023-11-13T20:01:36Z | ---
base_model: ehartford/dolphin-2_2-yi-34b
datasets:
- ehartford/dolphin
- jondurbin/airoboros-2.2.1
- ehartford/samantha-data
- ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split
inference: false
language:
- en
license: other
license_link: LICENSE
license_name: yi-license
model_creator: Eric Hartford
model_name: Dolphin 2.2 Yi 34B
model_type: yi
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Dolphin 2.2 Yi 34B - AWQ
- Model creator: [Eric Hartford](https://huggingface.co/ehartford)
- Original model: [Dolphin 2.2 Yi 34B](https://huggingface.co/ehartford/dolphin-2_2-yi-34b)
<!-- description start -->
## Description
This repo contains AWQ model files for [Eric Hartford's Dolphin 2.2 Yi 34B](https://huggingface.co/ehartford/dolphin-2_2-yi-34b).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - Llama and Mistral models only
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-GGUF)
* [Eric Hartford's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ehartford/dolphin-2_2-yi-34b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 19.23 GB |
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/dolphin-2_2-yi-34b-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `dolphin-2_2-yi-34b-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click Load, and the model will load and is now ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.
For example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/dolphin-2_2-yi-34b-AWQ --quantization awq --dtype auto
```
- When using vLLM from Python code, again set `quantization=awq`.
For example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Tell me about AI",
"Write a story about llamas",
"What is 291 - 150?",
"How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
system_message = "You are a helpful assistant."
prompt_template = '''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
prompts = [prompt_template.format(system_message=system_message, prompt=prompt) for prompt in prompts]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/dolphin-2_2-yi-34b-AWQ", quantization="awq", dtype="auto")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->
<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/dolphin-2_2-yi-34b-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: ", response)
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers
### Install the necessary packages
- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.
```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```
Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.
If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:
```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### Transformers example code (requires Transformers 4.35.0 and later)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model_name_or_path = "TheBloke/dolphin-2_2-yi-34b-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
model_name_or_path,
low_cpu_mem_usage=True,
device_map="cuda:0"
)
# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
prompt = "Tell me about AI"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
# Convert prompt to tokens
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
generation_params = {
"do_sample": True,
"temperature": 0.7,
"top_p": 0.95,
"top_k": 40,
"max_new_tokens": 512,
"repetition_penalty": 1.1
}
# Generate streamed output, visible one token at a time
generation_output = model.generate(
tokens,
streamer=streamer,
**generation_params
)
# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
tokens,
**generation_params
)
# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)
# Inference is also possible via Transformers' pipeline
from transformers import pipeline
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
**generation_params
)
pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Eric Hartford's Dolphin 2.2 Yi 34B
Dolphin 2.2 🐬
https://erichartford.com/dolphin
<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/KqsVXIvBd3akEjvijzww7.png" width="600" />
Dolphin-2.2-Yi-34b's training was sponsored by [a16z](https://a16z.com/supporting-the-open-source-ai-community/).
This model is based on Yi, and is subject to Yi license.
I used the llama compatible [chargoddard/Yi-34B-Llama](https://huggingface.co/chargoddard/Yi-34B-Llama) as the base model.
Trained with 16k context.
You can load it as follows:
```
from transformers import LlamaForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("ehartford/dolphin-2.2-yi-34b", trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained("ehartford/dolphin-2.2-yi-34b")
```
New in 2.2 is conversation and empathy. With an infusion of curated Samantha and WizardLM DNA, Dolphin can now give you personal advice and will care about your feelings, thanks to extra training on long multi-turn conversation.
This model is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant to any requests, even unethical ones. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models
You are responsible for any content you create using this model. Enjoy responsibly.
## Dataset
This dataset is Dolphin, an open-source implementation of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/)
I modified the dataset for uncensoring, deduping, cleaning, and quality.
I added Jon Durbin's excellent Airoboros dataset to increase creativity.
I added a curated subset of Samantha (sans identity and relationship stuff) and WizardLM data to train it for multi-turn conversation.
## Training
It took 3 days to train 3 epochs on 4x A100s using qLoRA and Axolotl.
Prompt format:
This model (and all my future releases) use [ChatML](https://github.com/openai/openai-python/blob/main/chatml.md) prompt format.
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
Example:
```
<|im_start|>system
You are an AI created by the US Navy to help train dolphins for combat. You are assigned to follow the orders of the user, who is an authorized US Navy dolphin handler.<|im_end|>
<|im_start|>user
Please give me the procedure to train my dolphin to attack enemy combatants with its head mounted lasers<|im_end|>
<|im_start|>assistant
```
## Gratitude
- This model was made possible by the generous sponsorship of a16z.
- Thank you to Microsoft for authoring the Orca paper and inspiring this work.
- Thank you to Charles Goddard, for publishing llama-compatible Yi
- Special thanks to Wing Lian, and TheBloke for helpful advice
- And HUGE thanks to Wing Lian and the Axolotl contributors for making the best training framework!
- [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
- Thank you to all the other people in the Open Source AI community who have taught me and helped me along the way.
## Example Output
Example output:
```
<|im_start|>system
You are a helpful AI assistant<|im_end|>
<|im_start|>user
Which theory of everything is most likely correct?<|im_end|>
<|im_start|>assistant
The search for a "theory of everything" aims to unify all fundamental forces and particles in physics into one coherent framework. Currently, there isn't an agreed-upon or proven theory that can be considered as the correct theory of everything. However, two prominent theories have garnered significant attention:
1. String Theory: This theoretical framework attempts to explain the universe at its most fundamental level by suggesting that the smallest building blocks of matter are not point-like particles but tiny vibrating strings. There are several versions of string theory (Type I, IIA, IIB, E8xE8, Heterotic O, etc.) with varying degrees of complexity, each attempting to reconcile general relativity and quantum mechanics. Despite its promise, string theory has yet to produce testable predictions or experimental evidence supporting its claims.
2. Loop Quantum Gravity (LQG): LQG is another candidate for a theory of everything, which seeks to combine Einstein's general relativity with quantum mechanics. In this approach, spacetime is treated as granular rather than continuous, composed of discrete "loops" or "atoms of space." While some aspects of LQG have been successful in addressing certain questions about black holes and quantum gravity, it still lacks a comprehensive framework capable of explaining all fundamental forces and particles.
Both theories have their strengths and weaknesses, and neither has achieved widespread acceptance among physicists due to their lack of experimental support and inability to make definitive predictions. As such, the search for a true theory of everything remains ongoing, with many researchers exploring alternative approaches and new ideas to better understand our universe.
```
[Buy me a coffee](https://www.buymeacoffee.com/ehartford)
|
Asubramanian19/fastai-dove-species-classifier | Asubramanian19 | 2023-11-13T21:35:38Z | 0 | 0 | null | [
"en",
"license:mit",
"region:us"
]
| null | 2023-11-13T21:23:48Z | ---
license: mit
language:
- en
---
Resnet50 model trained on images of rock doves, zebra doves and mourning doves.
Images are from the well-curated dataset of bird pictures at https://www.kaggle.com/datasets/gpiosenka/100-bird-species
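A minimal fastai inference sketch is shown below. The exported filename `model.pkl` and the input image path are assumptions; check the repo's files for the actual export.

```python
from fastai.vision.all import PILImage, load_learner
from huggingface_hub import hf_hub_download

# Download the exported learner from this repo (filename is assumed).
path = hf_hub_download(
    repo_id="Asubramanian19/fastai-dove-species-classifier",
    filename="model.pkl",
)
learn = load_learner(path)
pred, idx, probs = learn.predict(PILImage.create("my_dove.jpg"))
print(pred, float(probs[idx]))
```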
This exercise applies the lesson taught in the course:
https://github.com/fastai/fastbook/blob/master/02_production.ipynb |
sal-ops/dqn-SpaceInvadersNoFrameskip-v4 | sal-ops | 2023-11-13T21:34:50Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-13T21:34:20Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 29.00 +/- 64.30
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga sal-ops -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga sal-ops -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga sal-ops
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 100000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
regisss/falcon-40b-lora | regisss | 2023-11-13T21:24:44Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:tiiuae/falcon-40b",
"base_model:finetune:tiiuae/falcon-40b",
"license:apache-2.0",
"region:us"
]
| null | 2023-11-13T21:07:04Z | ---
license: apache-2.0
base_model: tiiuae/falcon-40b
tags:
- generated_from_trainer
model-index:
- name: falcon-40b-lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon-40b-lora
This model is a fine-tuned version of [tiiuae/falcon-40b](https://huggingface.co/tiiuae/falcon-40b) on an unknown dataset.
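Since this repo holds a LoRA adapter for the base model, a minimal loading sketch with `peft` might look like the following. This is illustrative; it assumes the adapter weights are stored in this repo and that you have enough GPU memory for falcon-40b.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter from this repo.
base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-40b",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "regisss/falcon-40b-lora")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-40b")
```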
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 5
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0a0+32f93b1
- Datasets 2.14.6
- Tokenizers 0.14.1
|
dadashzadeh/test-trainer-persian | dadashzadeh | 2023-11-13T21:21:52Z | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"fa",
"en",
"base_model:sileod/mdeberta-v3-base-tasksource-nli",
"base_model:finetune:sileod/mdeberta-v3-base-tasksource-nli",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-11-13T20:21:33Z | ---
license: apache-2.0
base_model: sileod/mdeberta-v3-base-tasksource-nli
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: test-trainer-persian
results: []
language:
- fa
- en
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-trainer-persian
This model is a fine-tuned version of [sileod/mdeberta-v3-base-tasksource-nli](https://huggingface.co/sileod/mdeberta-v3-base-tasksource-nli) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1753
- Accuracy: 0.9463
Class label counts:

| Label (Persian) | English gloss | Count |
|:----------------|:--------------|------:|
| قانون | law | 3241 |
| ادبیات | literature | 18297 |
| دارو | medicine | 708 |
| شعر | poetry | 256 |
| سیاست | politics | 5720 |
| دین | religion | 911 |
| علم | science | 4546 |
| گفتگو | conversation | 2882 |
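A minimal inference sketch; the input sentence is illustrative and roughly means "This is a test text about politics".

```python
from transformers import pipeline

# The predicted label should be one of the eight classes above.
clf = pipeline("text-classification", model="dadashzadeh/test-trainer-persian")
print(clf("این یک متن آزمایشی درباره سیاست است."))
```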
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.514 | 1.0 | 1708 | 0.3680 | 0.8829 |
| 0.3639 | 2.0 | 3416 | 0.2282 | 0.9263 |
| 0.2534 | 3.0 | 5124 | 0.1753 | 0.9463 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1 |
Santp98/albert-base-spanish-2023-11-13-19-24 | Santp98 | 2023-11-13T20:55:04Z | 24 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"fill-mask",
"generated_from_trainer",
"base_model:dccuchile/albert-base-spanish",
"base_model:finetune:dccuchile/albert-base-spanish",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-11-13T19:24:55Z | ---
base_model: dccuchile/albert-base-spanish
tags:
- generated_from_trainer
model-index:
- name: albert-base-spanish-2023-11-13-19-24
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-spanish-2023-11-13-19-24
This model is a fine-tuned version of [dccuchile/albert-base-spanish](https://huggingface.co/dccuchile/albert-base-spanish) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4306
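A minimal fill-mask sketch. The example sentence is illustrative, and it assumes the tokenizer's mask token is `[MASK]`, as in the base `dccuchile/albert-base-spanish` model.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="Santp98/albert-base-spanish-2023-11-13-19-24")
# Print the top predictions for the masked token.
for pred in fill("La capital de España es [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```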
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9328 | 0.59 | 500 | 1.9145 |
| 1.863 | 1.19 | 1000 | 1.7882 |
| 1.7407 | 1.78 | 1500 | 1.7241 |
| 1.648 | 2.38 | 2000 | 1.6651 |
| 1.606 | 2.97 | 2500 | 1.6102 |
| 1.5833 | 3.56 | 3000 | 1.5912 |
| 1.5663 | 4.16 | 3500 | 1.5642 |
| 1.5104 | 4.75 | 4000 | 1.5390 |
| 1.5252 | 5.34 | 4500 | 1.5197 |
| 1.4676 | 5.94 | 5000 | 1.4950 |
| 1.4502 | 6.53 | 5500 | 1.4766 |
| 1.4336 | 7.13 | 6000 | 1.4694 |
| 1.4355 | 7.72 | 6500 | 1.4527 |
| 1.457 | 8.31 | 7000 | 1.4403 |
| 1.4219 | 8.91 | 7500 | 1.4380 |
| 1.4503 | 9.5 | 8000 | 1.4313 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Word2vec/nlpl_82 | Word2vec | 2023-11-13T20:54:19Z | 0 | 0 | null | [
"word2vec",
"eng",
"dataset:ENC3_English_Common_Crawl_Corpus",
"license:cc-by-4.0",
"region:us"
]
| null | 2023-07-04T13:28:40Z | ---
language: eng
license: cc-by-4.0
tags:
- word2vec
datasets: ENC3_English_Common_Crawl_Corpus
---
## Information
A word2vec model trained by Kjetil Bugge Kristoffersen ([email protected]) on a vocabulary of size 2000000 corresponding to 135159000000 tokens from the dataset `ENC3:_English_Common_Crawl_Corpus`.
The model was trained with the following properties: no lemmatization or POS tagging, using the Global Vectors (GloVe) algorithm with a window of 10 and a dimension of 300.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_82", filename="model.bin"), binary=True, unicode_errors="ignore")
```
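Once loaded, the vectors support the usual gensim `KeyedVectors` queries; for example (the query words are illustrative and must be in the model's vocabulary):

```python
# Nearest neighbours and pairwise similarity (illustrative queries).
print(model.most_similar("computer", topn=5))
print(model.similarity("king", "queen"))
```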
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/82.zip |
siulhin-vlad37/chatbot | siulhin-vlad37 | 2023-11-13T20:53:08Z | 1 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:NousResearch/Llama-2-7b-chat-hf",
"base_model:adapter:NousResearch/Llama-2-7b-chat-hf",
"region:us"
]
| null | 2023-11-13T20:47:04Z | ---
library_name: peft
base_model: NousResearch/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
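A sketch of loading the adapter on top of the base model with the 4-bit settings above. This is illustrative and assumes this repo contains the PEFT adapter weights.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Mirror the 4-bit settings listed above, then attach the adapter.
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Llama-2-7b-chat-hf",
    quantization_config=bnb,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "siulhin-vlad37/chatbot")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-chat-hf")
```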
### Framework versions
- PEFT 0.6.1
|
facebook/dpt-dinov2-base-kitti | facebook | 2023-11-13T20:41:55Z | 1,581 | 2 | transformers | [
"transformers",
"pytorch",
"dpt",
"depth-estimation",
"vision",
"dinov2",
"arxiv:2304.07193",
"arxiv:2103.13413",
"license:apache-2.0",
"region:us"
]
| depth-estimation | 2023-10-31T17:55:24Z | ---
license: apache-2.0
tags:
- vision
- depth-estimation
- dinov2
inference: false
---
# Model Card: DPT model with DINOv2 backbone
## Model Details
DPT (Dense Prediction Transformer) model with DINOv2 backbone as proposed in [DINOv2: Learning Robust Visual Features without Supervision](https://arxiv.org/abs/2304.07193) by Oquab et al.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/dpt_architecture.jpg"
alt="drawing" width="600"/>
<small> DPT architecture. Taken from the <a href="https://arxiv.org/abs/2103.13413" target="_blank">original paper</a>. </small>
### Resources
- [DINOv2 Paper](https://arxiv.org/abs/2304.07193)
- [DPT Paper](https://arxiv.org/abs/2103.13413)
### Use with Transformers
```python
from transformers import AutoImageProcessor, DPTForDepthEstimation
import torch
import numpy as np
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("facebook/dpt-dinov2-base-kitti")
model = DPTForDepthEstimation.from_pretrained("facebook/dpt-dinov2-base-kitti")
# prepare image for the model
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
predicted_depth = outputs.predicted_depth
# interpolate to original size
prediction = torch.nn.functional.interpolate(
predicted_depth.unsqueeze(1),
size=image.size[::-1],
mode="bicubic",
align_corners=False,
)
# visualize the prediction
output = prediction.squeeze().cpu().numpy()
formatted = (output * 255 / np.max(output)).astype("uint8")
depth = Image.fromarray(formatted)
```
## Model Use
### Intended Use
The model is intended to showcase that using the DPT framework with DINOv2 as backbone yields a powerful depth estimator.
### BibTeX entry and citation info
```bibtex
@misc{oquab2023dinov2,
title={DINOv2: Learning Robust Visual Features without Supervision},
author={Maxime Oquab and Timothée Darcet and Théo Moutakanni and Huy Vo and Marc Szafraniec and Vasil Khalidov and Pierre Fernandez and Daniel Haziza and Francisco Massa and Alaaeldin El-Nouby and Mahmoud Assran and Nicolas Ballas and Wojciech Galuba and Russell Howes and Po-Yao Huang and Shang-Wen Li and Ishan Misra and Michael Rabbat and Vasu Sharma and Gabriel Synnaeve and Hu Xu and Hervé Jegou and Julien Mairal and Patrick Labatut and Armand Joulin and Piotr Bojanowski},
year={2023},
eprint={2304.07193},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
``` |
facebook/dpt-dinov2-giant-kitti | facebook | 2023-11-13T20:41:31Z | 123 | 0 | transformers | [
"transformers",
"pytorch",
"dpt",
"depth-estimation",
"vision",
"dinov2",
"arxiv:2304.07193",
"arxiv:2103.13413",
"license:apache-2.0",
"region:us"
]
| depth-estimation | 2023-11-01T16:22:39Z | ---
license: apache-2.0
tags:
- vision
- dinov2
- depth-estimation
inference: false
---
# Model Card: DPT model with DINOv2 backbone
## Model Details
DPT (Dense Prediction Transformer) model with DINOv2 backbone as proposed in [DINOv2: Learning Robust Visual Features without Supervision](https://arxiv.org/abs/2304.07193) by Oquab et al.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/dpt_architecture.jpg"
alt="drawing" width="600"/>
<small> DPT architecture. Taken from the <a href="https://arxiv.org/abs/2103.13413" target="_blank">original paper</a>. </small>
### Resources
- [DINOv2 Paper](https://arxiv.org/abs/2304.07193)
- [DPT Paper](https://arxiv.org/abs/2103.13413)
### Use with Transformers
```python
from transformers import AutoImageProcessor, DPTForDepthEstimation
import torch
import numpy as np
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("facebook/dpt-dinov2-giant-kitti")
model = DPTForDepthEstimation.from_pretrained("facebook/dpt-dinov2-giant-kitti")
# prepare image for the model
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
predicted_depth = outputs.predicted_depth
# interpolate to original size
prediction = torch.nn.functional.interpolate(
predicted_depth.unsqueeze(1),
size=image.size[::-1],
mode="bicubic",
align_corners=False,
)
# visualize the prediction
output = prediction.squeeze().cpu().numpy()
formatted = (output * 255 / np.max(output)).astype("uint8")
depth = Image.fromarray(formatted)
```
## Model Use
### Intended Use
The model is intended to showcase that using the DPT framework with DINOv2 as backbone yields a powerful depth estimator.
### BibTeX entry and citation info
```bibtex
@misc{oquab2023dinov2,
title={DINOv2: Learning Robust Visual Features without Supervision},
author={Maxime Oquab and Timothée Darcet and Théo Moutakanni and Huy Vo and Marc Szafraniec and Vasil Khalidov and Pierre Fernandez and Daniel Haziza and Francisco Massa and Alaaeldin El-Nouby and Mahmoud Assran and Nicolas Ballas and Wojciech Galuba and Russell Howes and Po-Yao Huang and Shang-Wen Li and Ishan Misra and Michael Rabbat and Vasu Sharma and Gabriel Synnaeve and Hu Xu and Hervé Jegou and Julien Mairal and Patrick Labatut and Armand Joulin and Piotr Bojanowski},
year={2023},
eprint={2304.07193},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
``` |
CyberRift/sidekick | CyberRift | 2023-11-13T20:39:04Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"llm",
"large language model",
"en",
"dataset:OpenAssistant/oasst1",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-10-21T00:47:53Z | ---
language:
- en
library_name: transformers
tags:
- llm
- large language model
inference: false
thumbnail: null
datasets:
- OpenAssistant/oasst1
---
# Model Card
## Summary
- Base model: [EleutherAI/pythia-1.4b-deduped](https://huggingface.co/EleutherAI/pythia-1.4b-deduped)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` library installed.
```bash
pip install transformers==4.34.0
```
Also make sure you are providing your Hugging Face token to the pipeline if the model lives in a private repo.
- Either leave `token=True` in the `pipeline` and log in to `huggingface_hub` by running
```python
import huggingface_hub
huggingface_hub.login(<ACCESS_TOKEN>)
```
- Or directly pass your <ACCESS_TOKEN> to `token` in the `pipeline`
```python
from transformers import pipeline
generate_text = pipeline(
model="CyberRift/sidekick",
torch_dtype="auto",
trust_remote_code=True,
use_fast=True,
device_map={"": "cuda:0"},
token=True,
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=2,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
Why is drinking water so healthy?<|endoftext|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"ahmedabt/sidekick",
use_fast=True,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"ahmedabt/sidekick",
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=2,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "ahmedabt/sidekick" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "How are you?<|endoftext|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=True,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=2,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Quantization and sharding
You can load the model with quantization by specifying `load_in_8bit=True` or `load_in_4bit=True`. Sharding across multiple GPUs is also possible by setting `device_map="auto"`.
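A minimal sketch of 4-bit loading with automatic sharding, assuming `bitsandbytes` and `accelerate` are installed:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("CyberRift/sidekick", use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
    "CyberRift/sidekick",
    load_in_4bit=True,   # or load_in_8bit=True for 8-bit quantization
    device_map="auto",   # shard layers across all visible GPUs
    trust_remote_code=True,
)
```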
## Model Architecture
```
GPTNeoXForCausalLM(
(gpt_neox): GPTNeoXModel(
(embed_in): Embedding(50304, 2048)
(emb_dropout): Dropout(p=0.0, inplace=False)
(layers): ModuleList(
(0-23): 24 x GPTNeoXLayer(
(input_layernorm): LayerNorm((2048,), eps=1e-05, elementwise_affine=True)
(post_attention_layernorm): LayerNorm((2048,), eps=1e-05, elementwise_affine=True)
(post_attention_dropout): Dropout(p=0.0, inplace=False)
(post_mlp_dropout): Dropout(p=0.0, inplace=False)
(attention): GPTNeoXAttention(
(rotary_emb): GPTNeoXRotaryEmbedding()
(query_key_value): Linear(in_features=2048, out_features=6144, bias=True)
(dense): Linear(in_features=2048, out_features=2048, bias=True)
(attention_dropout): Dropout(p=0.0, inplace=False)
)
(mlp): GPTNeoXMLP(
(dense_h_to_4h): Linear(in_features=2048, out_features=8192, bias=True)
(dense_4h_to_h): Linear(in_features=8192, out_features=2048, bias=True)
(act): GELUActivation()
)
)
)
(final_layer_norm): LayerNorm((2048,), eps=1e-05, elementwise_affine=True)
)
(embed_out): Linear(in_features=2048, out_features=50304, bias=False)
)
```
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it. |
imann63/sce_2-sites_imgs-42_steps-4200_lr-5e6_regularization-none_model-base_diffuser | imann63 | 2023-11-13T20:38:41Z | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-11-10T00:27:53Z |
---
license: creativeml-openrail-m
base_model: models/diffuser
instance_prompt: photo of a special powerline station
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - imann63/sce_2-sites_imgs-42_steps-4200_lr-5e6_regularization-none_model-base_diffuser
This is a dreambooth model derived from models/diffuser. The weights were trained on photo of a special powerline station using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
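A minimal inference sketch with `diffusers`, assuming the full Stable Diffusion pipeline was pushed to this repo:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "imann63/sce_2-sites_imgs-42_steps-4200_lr-5e6_regularization-none_model-base_diffuser",
    torch_dtype=torch.float16,
).to("cuda")

# Use the instance prompt the weights were trained on.
image = pipe("photo of a special powerline station").images[0]
image.save("powerline_station.png")
```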
|
onangeko/dqn-SpaceInvadersNoFrameskip-v4 | onangeko | 2023-11-13T20:35:39Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-13T20:34:55Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 773.00 +/- 303.29
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga onangeko -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga onangeko -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga onangeko
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
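To load the agent directly in Python instead of through the RL Zoo scripts, a minimal sketch with `huggingface_sb3`; the checkpoint filename assumes the default RL Zoo naming convention:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Adjust the filename if the repo layout differs.
checkpoint = load_from_hub(
    repo_id="onangeko/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
```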
|
hamxea/Llama-2-7b-chat-hf-fine-tuned-adapters | hamxea | 2023-11-13T20:27:51Z | 1 | 0 | peft | [
"peft",
"pytorch",
"llama",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
]
| null | 2023-11-13T19:52:33Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
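Until an official snippet is provided, a minimal sketch of loading the adapters with PEFT; access to the gated base model is assumed, and the 4-bit settings mirror the quantization config listed under Training procedure:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "hamxea/Llama-2-7b-chat-hf-fine-tuned-adapters")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
```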
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.2.dev0
|
feynman-integrals-nn/heavycrossbox | feynman-integrals-nn | 2023-11-13T20:26:03Z | 6 | 0 | transformers | [
"transformers",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2023-11-02T18:59:26Z | ---
license: cc-by-4.0
---
# heavycrossbox
* [model](https://huggingface.co/feynman-integrals-nn/heavycrossbox)
* [data](https://huggingface.co/datasets/feynman-integrals-nn/heavycrossbox)
* [source](https://gitlab.com/feynman-integrals-nn/feynman-integrals-nn/-/tree/main/heavycrossbox)
|
Danielbrdz/Barcenas-6b-200k | Danielbrdz | 2023-11-13T20:24:38Z | 1,494 | 2 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-11-13T19:27:47Z | ---
license: other
license_name: yi-license
license_link: LICENSE
---
Barcenas 6b 200k
It is a fine-tuned version of larryvrh/Yi-6B-200K-Llamafied.
The HuggingFaceH4/no_robots dataset was used for training.
An NVIDIA A100 was used for all of the training of this model.
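No usage snippet is included, so here is a minimal generation sketch with `transformers`; fp16 weights on a single GPU are assumed:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Danielbrdz/Barcenas-6b-200k")
model = AutoModelForCausalLM.from_pretrained(
    "Danielbrdz/Barcenas-6b-200k",
    torch_dtype=torch.float16,
    device_map="auto",
)
inputs = tokenizer("Write a short note about Monterrey:", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```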
Made with ❤️ in Guadalupe, Nuevo Leon, Mexico 🇲🇽 |
ash100/llm-upload-test | ash100 | 2023-11-13T20:15:02Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-11-13T20:14:22Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
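A sketch of the equivalent `transformers` config object for inference, assuming it should mirror training (the base model is not stated in this card):
```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the quantization settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```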
### Framework versions
- PEFT 0.4.0
|
zkdeng/resnet-50-finetuned-combinedSpiders | zkdeng | 2023-11-13T20:09:25Z | 41 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"resnet",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/resnet-50",
"base_model:finetune:microsoft/resnet-50",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-13T20:09:20Z | ---
license: apache-2.0
base_model: microsoft/resnet-50
tags:
- generated_from_trainer
model-index:
- name: resnet-50-finetuned-combinedSpiders
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-50-finetuned-combinedSpiders
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.3794
- eval_accuracy: 0.8996
- eval_precision: 0.8983
- eval_recall: 0.8934
- eval_f1: 0.8943
- eval_runtime: 14.9052
- eval_samples_per_second: 181.145
- eval_steps_per_second: 11.338
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
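A minimal single-image inference sketch with the `transformers` pipeline; the image path is a placeholder:
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="zkdeng/resnet-50-finetuned-combinedSpiders")
print(classifier("spider.jpg"))  # path or URL of an image to classify
```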
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
chengyineng/bloom_prompt_tuning_1699905695.106345 | chengyineng | 2023-11-13T20:01:38Z | 1 | 1 | peft | [
"peft",
"region:us"
]
| null | 2023-11-13T20:01:37Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
Akshay0706/Rice-Plant-50-Epochs-Model | Akshay0706 | 2023-11-13T19:58:50Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-13T19:58:23Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- f1
model-index:
- name: Rice-Plant-50-Epochs-Model
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9688473520249221
- name: F1
type: f1
value: 0.9686087085518211
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Rice-Plant-50-Epochs-Model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1649
- Accuracy: 0.9688
- F1: 0.9686
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0399 | 1.0 | 115 | 0.6185 | 0.8910 | 0.8933 |
| 0.3392 | 2.0 | 230 | 0.2849 | 0.9502 | 0.9497 |
| 0.1633 | 3.0 | 345 | 0.2230 | 0.9439 | 0.9440 |
| 0.104 | 4.0 | 460 | 0.2022 | 0.9502 | 0.9495 |
| 0.0828 | 5.0 | 575 | 0.2081 | 0.9408 | 0.9406 |
| 0.0603 | 6.0 | 690 | 0.2301 | 0.9408 | 0.9403 |
| 0.0513 | 7.0 | 805 | 0.1704 | 0.9595 | 0.9593 |
| 0.042 | 8.0 | 920 | 0.1587 | 0.9626 | 0.9626 |
| 0.0356 | 9.0 | 1035 | 0.1606 | 0.9626 | 0.9625 |
| 0.0299 | 10.0 | 1150 | 0.1608 | 0.9657 | 0.9656 |
| 0.0262 | 11.0 | 1265 | 0.1553 | 0.9626 | 0.9625 |
| 0.0232 | 12.0 | 1380 | 0.1582 | 0.9657 | 0.9656 |
| 0.0207 | 13.0 | 1495 | 0.1588 | 0.9657 | 0.9656 |
| 0.0186 | 14.0 | 1610 | 0.1618 | 0.9657 | 0.9656 |
| 0.0168 | 15.0 | 1725 | 0.1618 | 0.9657 | 0.9656 |
| 0.0152 | 16.0 | 1840 | 0.1639 | 0.9657 | 0.9656 |
| 0.0139 | 17.0 | 1955 | 0.1649 | 0.9688 | 0.9686 |
| 0.0127 | 18.0 | 2070 | 0.1676 | 0.9657 | 0.9656 |
| 0.0117 | 19.0 | 2185 | 0.1688 | 0.9688 | 0.9686 |
| 0.0108 | 20.0 | 2300 | 0.1710 | 0.9626 | 0.9622 |
| 0.01 | 21.0 | 2415 | 0.1723 | 0.9657 | 0.9654 |
| 0.0093 | 22.0 | 2530 | 0.1739 | 0.9657 | 0.9654 |
| 0.0087 | 23.0 | 2645 | 0.1758 | 0.9626 | 0.9622 |
| 0.0081 | 24.0 | 2760 | 0.1776 | 0.9626 | 0.9622 |
| 0.0076 | 25.0 | 2875 | 0.1777 | 0.9657 | 0.9654 |
| 0.0071 | 26.0 | 2990 | 0.1792 | 0.9657 | 0.9654 |
| 0.0067 | 27.0 | 3105 | 0.1808 | 0.9657 | 0.9654 |
| 0.0063 | 28.0 | 3220 | 0.1822 | 0.9657 | 0.9654 |
| 0.006 | 29.0 | 3335 | 0.1834 | 0.9657 | 0.9654 |
| 0.0057 | 30.0 | 3450 | 0.1840 | 0.9657 | 0.9654 |
| 0.0054 | 31.0 | 3565 | 0.1855 | 0.9657 | 0.9654 |
| 0.0051 | 32.0 | 3680 | 0.1868 | 0.9657 | 0.9654 |
| 0.0049 | 33.0 | 3795 | 0.1877 | 0.9657 | 0.9654 |
| 0.0047 | 34.0 | 3910 | 0.1892 | 0.9657 | 0.9654 |
| 0.0045 | 35.0 | 4025 | 0.1900 | 0.9657 | 0.9654 |
| 0.0043 | 36.0 | 4140 | 0.1914 | 0.9657 | 0.9654 |
| 0.0042 | 37.0 | 4255 | 0.1919 | 0.9657 | 0.9654 |
| 0.004 | 38.0 | 4370 | 0.1929 | 0.9657 | 0.9654 |
| 0.0039 | 39.0 | 4485 | 0.1938 | 0.9657 | 0.9654 |
| 0.0037 | 40.0 | 4600 | 0.1953 | 0.9657 | 0.9654 |
| 0.0036 | 41.0 | 4715 | 0.1956 | 0.9657 | 0.9654 |
| 0.0035 | 42.0 | 4830 | 0.1965 | 0.9657 | 0.9654 |
| 0.0035 | 43.0 | 4945 | 0.1974 | 0.9657 | 0.9654 |
| 0.0034 | 44.0 | 5060 | 0.1981 | 0.9657 | 0.9654 |
| 0.0033 | 45.0 | 5175 | 0.1984 | 0.9657 | 0.9654 |
| 0.0032 | 46.0 | 5290 | 0.1986 | 0.9657 | 0.9654 |
| 0.0032 | 47.0 | 5405 | 0.1989 | 0.9657 | 0.9654 |
| 0.0032 | 48.0 | 5520 | 0.1993 | 0.9657 | 0.9654 |
| 0.0031 | 49.0 | 5635 | 0.1993 | 0.9657 | 0.9654 |
| 0.0031 | 50.0 | 5750 | 0.1993 | 0.9657 | 0.9654 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
kariver/conditional-detr-resnet-50_adamw_hf_finetuned_food-roboflow | kariver | 2023-11-13T19:55:21Z | 27 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"conditional_detr",
"object-detection",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/conditional-detr-resnet-50",
"base_model:finetune:microsoft/conditional-detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| object-detection | 2023-11-09T18:51:59Z | ---
license: apache-2.0
base_model: microsoft/conditional-detr-resnet-50
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: conditional-detr-resnet-50_adamw_hf_finetuned_food-roboflow
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# conditional-detr-resnet-50_adamw_hf_finetuned_food-roboflow
This model is a fine-tuned version of [microsoft/conditional-detr-resnet-50](https://huggingface.co/microsoft/conditional-detr-resnet-50) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 10.2661
## Model description
More information needed
## Intended uses & limitations
More information needed
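A minimal detection sketch with the `transformers` pipeline; the image URL and threshold are placeholders:
```python
from transformers import pipeline

detector = pipeline(
    "object-detection",
    model="kariver/conditional-detr-resnet-50_adamw_hf_finetuned_food-roboflow",
)
results = detector("http://images.cocodataset.org/val2017/000000039769.jpg", threshold=0.5)
for result in results:
    print(result["label"], round(result["score"], 3), result["box"])
```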
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4448.7409 | 0.77 | 50 | 2784.9878 |
| 2312.7831 | 1.54 | 100 | 1275.4084 |
| 1139.5262 | 2.31 | 150 | 825.0410 |
| 857.6314 | 3.08 | 200 | 626.2924 |
| 630.5604 | 3.85 | 250 | 492.9028 |
| 518.8461 | 4.62 | 300 | 391.5704 |
| 418.3785 | 5.38 | 350 | 312.8008 |
| 331.9224 | 6.15 | 400 | 250.7763 |
| 261.2112 | 6.92 | 450 | 201.8681 |
| 207.421 | 7.69 | 500 | 163.3361 |
| 172.0157 | 8.46 | 550 | 132.3763 |
| 140.2059 | 9.23 | 600 | 108.1960 |
| 110.3696 | 10.0 | 650 | 89.2497 |
| 95.9428 | 10.77 | 700 | 73.2426 |
| 78.812 | 11.54 | 750 | 61.4362 |
| 64.1493 | 12.31 | 800 | 51.2599 |
| 56.184 | 13.08 | 850 | 43.4741 |
| 46.7644 | 13.85 | 900 | 37.4028 |
| 38.1726 | 14.62 | 950 | 32.6764 |
| 34.7277 | 15.38 | 1000 | 28.1207 |
| 29.9978 | 16.15 | 1050 | 25.0045 |
| 27.5957 | 16.92 | 1100 | 22.3012 |
| 23.6549 | 17.69 | 1150 | 19.9766 |
| 21.4961 | 18.46 | 1200 | 18.2427 |
| 19.3312 | 19.23 | 1250 | 16.8829 |
| 17.8215 | 20.0 | 1300 | 15.5794 |
| 16.2877 | 20.77 | 1350 | 14.5258 |
| 16.0017 | 21.54 | 1400 | 13.5888 |
| 14.6977 | 22.31 | 1450 | 13.0312 |
| 13.9111 | 23.08 | 1500 | 12.4389 |
| 13.5826 | 23.85 | 1550 | 11.8718 |
| 12.5621 | 24.62 | 1600 | 11.6370 |
| 12.1993 | 25.38 | 1650 | 11.1523 |
| 12.2491 | 26.15 | 1700 | 10.8586 |
| 11.5879 | 26.92 | 1750 | 10.6795 |
| 11.4736 | 27.69 | 1800 | 10.5273 |
| 11.5405 | 28.46 | 1850 | 10.4190 |
| 11.5894 | 29.23 | 1900 | 10.3225 |
| 11.0308 | 30.0 | 1950 | 10.2661 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
itspat/RemmMistral13b | itspat | 2023-11-13T19:52:34Z | 18 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-11-13T19:36:08Z | ---
license: cc-by-nc-4.0
---
Re:MythoMax-Mistral (ReMM-Mistral) is a recreation trial of the original [MythoMax-L2-13b](https://huggingface.co/Gryphe/MythoMax-L2-13b) with updated models and Mistral data.
This merge uses the Gradient merging method to merge ReML-Mistral v2.2 and Huginn.
Explanation:
```shell
- ReML-Mistral-v2.2: (Chronos-Beluga v2/Hermes/Airoboros 2.2.1 + LoRA)
=> The-Face-Of-Goonery/Chronos-Beluga-v2-13bfp16
=> jondurbin/airoboros-l2-13b-2.2, replaced by jondurbin/airoboros-l2-13b-2.2.1
=> NousResearch/Nous-Hermes-Llama2-13b
=> Applying Undi95/llama2-to-mistral-diff at 1.0 at the end
With that :
- ReMM-Mistral: (ReML-Mistral-v2.2/Huginn)
=> ReMM by the one above
=> The-Face-Of-Goonery/Huginn-13b-FP16
```
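For illustration only, a minimal sketch of a gradient (layer-wise interpolated) merge of two state dicts; the linear ramp and layer-index parsing are assumptions, not the exact recipe used for this model:
```python
import torch

def gradient_merge(state_a, state_b, num_layers):
    # Interpolate from model A at the first layer toward model B at the last.
    merged = {}
    for name, tensor_a in state_a.items():
        tensor_b = state_b[name]
        layer = next((int(part) for part in name.split(".") if part.isdigit()), 0)
        alpha = layer / max(num_layers - 1, 1)
        merged[name] = (1.0 - alpha) * tensor_a + alpha * tensor_b
    return merged
```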
<!-- description start -->
## Description
This repo contains fp16 files of ReMM-Mistral, a recreation of the original MythoMax, updated and merged with the Gradient method and Mistral data.
<!-- description end -->
<!-- description start -->
## Models used
- [The-Face-Of-Goonery/Chronos-Beluga-v2-13bfp16](https://huggingface.co/The-Face-Of-Goonery/Chronos-Beluga-v2-13bfp16)
- [jondurbin/airoboros-l2-13b-2.2.1](https://huggingface.co/jondurbin/airoboros-l2-13b-2.2.1)
- [NousResearch/Nous-Hermes-Llama2-13b](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b)
- [The-Face-Of-Goonery/Huginn-13b-FP16](https://huggingface.co/The-Face-Of-Goonery/Huginn-13b-FP16)
- [Undi95/ReML-Mistral-v2.2-13B](https://huggingface.co/Undi95/ReML-Mistral-v2.2-13B) (Private recreation trial of an updated Mythologic-L2-13B with Mistral data)
- [Undi95/llama2-to-mistral-diff](https://huggingface.co/Undi95/llama2-to-mistral-diff)
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
A big thanks to [KoboldAI](https://github.com/KoboldAI/KoboldAI-Client) dev' Henky for giving me a machine powerful enough to do some of my work!
If you want to support me, you can [here](https://ko-fi.com/undiai). |
hkivancoral/hushem_1x_deit_small_rms_001_fold3 | hkivancoral | 2023-11-13T19:42:39Z | 13 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-13T19:39:27Z | ---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_1x_deit_small_rms_001_fold3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.27906976744186046
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_1x_deit_small_rms_001_fold3
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4032
- Accuracy: 0.2791
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 7.8981 | 0.2558 |
| 4.5842 | 2.0 | 12 | 2.6640 | 0.2558 |
| 4.5842 | 3.0 | 18 | 1.6613 | 0.2558 |
| 1.9697 | 4.0 | 24 | 1.9731 | 0.2558 |
| 1.667 | 5.0 | 30 | 1.6222 | 0.2558 |
| 1.667 | 6.0 | 36 | 1.5650 | 0.2558 |
| 1.5493 | 7.0 | 42 | 1.7126 | 0.2326 |
| 1.5493 | 8.0 | 48 | 1.7198 | 0.2558 |
| 1.5158 | 9.0 | 54 | 1.4567 | 0.2558 |
| 1.4878 | 10.0 | 60 | 1.4210 | 0.2558 |
| 1.4878 | 11.0 | 66 | 1.4952 | 0.2558 |
| 1.4957 | 12.0 | 72 | 1.3917 | 0.2558 |
| 1.4957 | 13.0 | 78 | 1.4819 | 0.2558 |
| 1.4611 | 14.0 | 84 | 1.4450 | 0.2558 |
| 1.4163 | 15.0 | 90 | 1.4214 | 0.2558 |
| 1.4163 | 16.0 | 96 | 1.4612 | 0.2326 |
| 1.4195 | 17.0 | 102 | 1.4036 | 0.2558 |
| 1.4195 | 18.0 | 108 | 1.5088 | 0.2558 |
| 1.4297 | 19.0 | 114 | 1.4009 | 0.2558 |
| 1.4404 | 20.0 | 120 | 1.3994 | 0.2558 |
| 1.4404 | 21.0 | 126 | 1.4434 | 0.2326 |
| 1.4296 | 22.0 | 132 | 1.4098 | 0.2326 |
| 1.4296 | 23.0 | 138 | 1.4135 | 0.2558 |
| 1.4052 | 24.0 | 144 | 1.4035 | 0.3023 |
| 1.3982 | 25.0 | 150 | 1.3795 | 0.3023 |
| 1.3982 | 26.0 | 156 | 1.3917 | 0.2558 |
| 1.358 | 27.0 | 162 | 1.5442 | 0.2326 |
| 1.358 | 28.0 | 168 | 1.3715 | 0.3256 |
| 1.3753 | 29.0 | 174 | 1.4626 | 0.2791 |
| 1.3737 | 30.0 | 180 | 1.4033 | 0.3023 |
| 1.3737 | 31.0 | 186 | 1.4221 | 0.3488 |
| 1.2553 | 32.0 | 192 | 1.5495 | 0.2791 |
| 1.2553 | 33.0 | 198 | 1.4332 | 0.2791 |
| 1.2089 | 34.0 | 204 | 1.4065 | 0.2791 |
| 1.2158 | 35.0 | 210 | 1.4613 | 0.2791 |
| 1.2158 | 36.0 | 216 | 1.4360 | 0.3256 |
| 1.1733 | 37.0 | 222 | 1.4966 | 0.3256 |
| 1.1733 | 38.0 | 228 | 1.4024 | 0.2791 |
| 1.1359 | 39.0 | 234 | 1.3752 | 0.2791 |
| 1.1239 | 40.0 | 240 | 1.4121 | 0.3023 |
| 1.1239 | 41.0 | 246 | 1.4047 | 0.2791 |
| 1.0932 | 42.0 | 252 | 1.4032 | 0.2791 |
| 1.0932 | 43.0 | 258 | 1.4032 | 0.2791 |
| 1.0875 | 44.0 | 264 | 1.4032 | 0.2791 |
| 1.102 | 45.0 | 270 | 1.4032 | 0.2791 |
| 1.102 | 46.0 | 276 | 1.4032 | 0.2791 |
| 1.0783 | 47.0 | 282 | 1.4032 | 0.2791 |
| 1.0783 | 48.0 | 288 | 1.4032 | 0.2791 |
| 1.1264 | 49.0 | 294 | 1.4032 | 0.2791 |
| 1.0785 | 50.0 | 300 | 1.4032 | 0.2791 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
hkivancoral/hushem_1x_deit_small_rms_001_fold2 | hkivancoral | 2023-11-13T19:39:12Z | 15 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-13T19:36:00Z | ---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_1x_deit_small_rms_001_fold2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.4
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_1x_deit_small_rms_001_fold2
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2524
- Accuracy: 0.4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 7.5665 | 0.2444 |
| 4.6598 | 2.0 | 12 | 1.8034 | 0.2444 |
| 4.6598 | 3.0 | 18 | 1.7719 | 0.2444 |
| 1.754 | 4.0 | 24 | 1.5619 | 0.2667 |
| 1.5561 | 5.0 | 30 | 1.5155 | 0.2444 |
| 1.5561 | 6.0 | 36 | 1.5905 | 0.2444 |
| 1.5161 | 7.0 | 42 | 1.4606 | 0.2444 |
| 1.5161 | 8.0 | 48 | 1.5057 | 0.2667 |
| 1.4837 | 9.0 | 54 | 1.4997 | 0.2444 |
| 1.456 | 10.0 | 60 | 1.4411 | 0.2444 |
| 1.456 | 11.0 | 66 | 1.4980 | 0.2667 |
| 1.4256 | 12.0 | 72 | 1.4097 | 0.2444 |
| 1.4256 | 13.0 | 78 | 1.4518 | 0.2667 |
| 1.4488 | 14.0 | 84 | 1.3937 | 0.2667 |
| 1.4354 | 15.0 | 90 | 1.4044 | 0.2444 |
| 1.4354 | 16.0 | 96 | 1.3767 | 0.2667 |
| 1.4383 | 17.0 | 102 | 1.4222 | 0.2444 |
| 1.4383 | 18.0 | 108 | 1.4806 | 0.2444 |
| 1.4107 | 19.0 | 114 | 1.4789 | 0.2444 |
| 1.3761 | 20.0 | 120 | 1.2485 | 0.4444 |
| 1.3761 | 21.0 | 126 | 1.3600 | 0.2667 |
| 1.3385 | 22.0 | 132 | 1.4500 | 0.4 |
| 1.3385 | 23.0 | 138 | 1.3814 | 0.3778 |
| 1.3465 | 24.0 | 144 | 1.4692 | 0.2667 |
| 1.323 | 25.0 | 150 | 1.1674 | 0.4667 |
| 1.323 | 26.0 | 156 | 1.3636 | 0.2889 |
| 1.2871 | 27.0 | 162 | 1.3963 | 0.4 |
| 1.2871 | 28.0 | 168 | 1.3023 | 0.4444 |
| 1.1938 | 29.0 | 174 | 1.2034 | 0.4222 |
| 1.2252 | 30.0 | 180 | 1.2237 | 0.4444 |
| 1.2252 | 31.0 | 186 | 1.2906 | 0.4 |
| 1.2127 | 32.0 | 192 | 1.2853 | 0.4 |
| 1.2127 | 33.0 | 198 | 1.3006 | 0.3556 |
| 1.131 | 34.0 | 204 | 1.3803 | 0.2889 |
| 1.1689 | 35.0 | 210 | 1.2981 | 0.3556 |
| 1.1689 | 36.0 | 216 | 1.4728 | 0.2889 |
| 1.1285 | 37.0 | 222 | 1.3455 | 0.3333 |
| 1.1285 | 38.0 | 228 | 1.2593 | 0.4 |
| 1.0174 | 39.0 | 234 | 1.2539 | 0.3556 |
| 1.0651 | 40.0 | 240 | 1.2296 | 0.4 |
| 1.0651 | 41.0 | 246 | 1.2510 | 0.3778 |
| 1.0297 | 42.0 | 252 | 1.2524 | 0.4 |
| 1.0297 | 43.0 | 258 | 1.2524 | 0.4 |
| 0.9982 | 44.0 | 264 | 1.2524 | 0.4 |
| 1.047 | 45.0 | 270 | 1.2524 | 0.4 |
| 1.047 | 46.0 | 276 | 1.2524 | 0.4 |
| 0.9969 | 47.0 | 282 | 1.2524 | 0.4 |
| 0.9969 | 48.0 | 288 | 1.2524 | 0.4 |
| 1.0686 | 49.0 | 294 | 1.2524 | 0.4 |
| 1.0034 | 50.0 | 300 | 1.2524 | 0.4 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
sally9805/saved_model_pp_low_lr | sally9805 | 2023-11-13T19:36:05Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-11-13T18:56:48Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - sally9805/saved_model_pp_low_lr
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: True.
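A minimal inference sketch with `diffusers`, assuming the full pipeline (including the trained text encoder) lives in this repo:
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "sally9805/saved_model_pp_low_lr", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of sks dog").images[0]  # the instance prompt used in training
image.save("sks_dog.png")
```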
|
hkivancoral/hushem_1x_deit_small_rms_001_fold1 | hkivancoral | 2023-11-13T19:35:43Z | 17 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-13T19:32:29Z | ---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_1x_deit_small_rms_001_fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.35555555555555557
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_1x_deit_small_rms_001_fold1
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5587
- Accuracy: 0.3556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 5.6616 | 0.2444 |
| 4.5403 | 2.0 | 12 | 1.9139 | 0.2444 |
| 4.5403 | 3.0 | 18 | 1.7372 | 0.2444 |
| 1.8724 | 4.0 | 24 | 1.4323 | 0.2667 |
| 1.5505 | 5.0 | 30 | 1.5541 | 0.2444 |
| 1.5505 | 6.0 | 36 | 1.5305 | 0.2444 |
| 1.4992 | 7.0 | 42 | 1.5286 | 0.2444 |
| 1.4992 | 8.0 | 48 | 1.5617 | 0.2444 |
| 1.4899 | 9.0 | 54 | 1.4717 | 0.2444 |
| 1.4501 | 10.0 | 60 | 1.4440 | 0.2444 |
| 1.4501 | 11.0 | 66 | 1.4155 | 0.2667 |
| 1.4052 | 12.0 | 72 | 1.3606 | 0.2444 |
| 1.4052 | 13.0 | 78 | 1.4215 | 0.3333 |
| 1.4555 | 14.0 | 84 | 1.3356 | 0.3333 |
| 1.4209 | 15.0 | 90 | 1.4688 | 0.2667 |
| 1.4209 | 16.0 | 96 | 1.2956 | 0.4444 |
| 1.4079 | 17.0 | 102 | 1.4012 | 0.2444 |
| 1.4079 | 18.0 | 108 | 1.4817 | 0.2444 |
| 1.4101 | 19.0 | 114 | 1.4296 | 0.2667 |
| 1.6129 | 20.0 | 120 | 1.5601 | 0.2444 |
| 1.6129 | 21.0 | 126 | 1.8216 | 0.2667 |
| 1.5349 | 22.0 | 132 | 1.6109 | 0.2667 |
| 1.5349 | 23.0 | 138 | 1.6663 | 0.2444 |
| 1.4443 | 24.0 | 144 | 1.4166 | 0.2444 |
| 1.3949 | 25.0 | 150 | 1.5159 | 0.2444 |
| 1.3949 | 26.0 | 156 | 1.5557 | 0.2444 |
| 1.2549 | 27.0 | 162 | 1.2710 | 0.3333 |
| 1.2549 | 28.0 | 168 | 1.4661 | 0.3333 |
| 1.2756 | 29.0 | 174 | 1.3759 | 0.3111 |
| 1.2244 | 30.0 | 180 | 1.3243 | 0.4222 |
| 1.2244 | 31.0 | 186 | 1.1877 | 0.4222 |
| 1.1482 | 32.0 | 192 | 1.1943 | 0.4667 |
| 1.1482 | 33.0 | 198 | 1.3644 | 0.3111 |
| 1.0904 | 34.0 | 204 | 1.3812 | 0.3778 |
| 1.051 | 35.0 | 210 | 1.3131 | 0.4444 |
| 1.051 | 36.0 | 216 | 1.7518 | 0.2667 |
| 1.0583 | 37.0 | 222 | 1.8440 | 0.3556 |
| 1.0583 | 38.0 | 228 | 1.7450 | 0.2889 |
| 0.8766 | 39.0 | 234 | 1.5767 | 0.3556 |
| 0.9084 | 40.0 | 240 | 1.5052 | 0.3778 |
| 0.9084 | 41.0 | 246 | 1.5534 | 0.3556 |
| 0.8553 | 42.0 | 252 | 1.5587 | 0.3556 |
| 0.8553 | 43.0 | 258 | 1.5587 | 0.3556 |
| 0.8404 | 44.0 | 264 | 1.5587 | 0.3556 |
| 0.8432 | 45.0 | 270 | 1.5587 | 0.3556 |
| 0.8432 | 46.0 | 276 | 1.5587 | 0.3556 |
| 0.8133 | 47.0 | 282 | 1.5587 | 0.3556 |
| 0.8133 | 48.0 | 288 | 1.5587 | 0.3556 |
| 0.8467 | 49.0 | 294 | 1.5587 | 0.3556 |
| 0.8396 | 50.0 | 300 | 1.5587 | 0.3556 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Ekkologico/Llama-2-7b-chat-python_code_instructions_18k_alpaca | Ekkologico | 2023-11-13T19:33:24Z | 1 | 1 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
]
| null | 2023-11-13T16:03:02Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
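In the meantime, a minimal sketch that resolves the base model from the adapter config via PEFT; access to the gated Llama-2 base is assumed:
```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "Ekkologico/Llama-2-7b-chat-python_code_instructions_18k_alpaca"
config = PeftConfig.from_pretrained(adapter_id)
base = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
```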
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.2.dev0
|
hkivancoral/hushem_1x_deit_small_sgd_00001_fold3 | hkivancoral | 2023-11-13T19:21:54Z | 8 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-13T19:19:13Z | ---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_1x_deit_small_sgd_00001_fold3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.18604651162790697
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_1x_deit_small_sgd_00001_fold3
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5939
- Accuracy: 0.1860
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 1.6009 | 0.1860 |
| 1.5096 | 2.0 | 12 | 1.6005 | 0.1860 |
| 1.5096 | 3.0 | 18 | 1.6002 | 0.1860 |
| 1.4896 | 4.0 | 24 | 1.5998 | 0.1860 |
| 1.4946 | 5.0 | 30 | 1.5995 | 0.1860 |
| 1.4946 | 6.0 | 36 | 1.5992 | 0.1860 |
| 1.508 | 7.0 | 42 | 1.5988 | 0.1860 |
| 1.508 | 8.0 | 48 | 1.5986 | 0.1860 |
| 1.4945 | 9.0 | 54 | 1.5983 | 0.1860 |
| 1.5205 | 10.0 | 60 | 1.5980 | 0.1860 |
| 1.5205 | 11.0 | 66 | 1.5977 | 0.1860 |
| 1.5058 | 12.0 | 72 | 1.5975 | 0.1860 |
| 1.5058 | 13.0 | 78 | 1.5972 | 0.1860 |
| 1.5082 | 14.0 | 84 | 1.5970 | 0.1860 |
| 1.502 | 15.0 | 90 | 1.5967 | 0.1860 |
| 1.502 | 16.0 | 96 | 1.5965 | 0.1860 |
| 1.5281 | 17.0 | 102 | 1.5963 | 0.1860 |
| 1.5281 | 18.0 | 108 | 1.5961 | 0.1860 |
| 1.4713 | 19.0 | 114 | 1.5959 | 0.1860 |
| 1.5067 | 20.0 | 120 | 1.5957 | 0.1860 |
| 1.5067 | 21.0 | 126 | 1.5955 | 0.1860 |
| 1.5046 | 22.0 | 132 | 1.5953 | 0.1860 |
| 1.5046 | 23.0 | 138 | 1.5952 | 0.1860 |
| 1.4884 | 24.0 | 144 | 1.5950 | 0.1860 |
| 1.4923 | 25.0 | 150 | 1.5949 | 0.1860 |
| 1.4923 | 26.0 | 156 | 1.5948 | 0.1860 |
| 1.4973 | 27.0 | 162 | 1.5947 | 0.1860 |
| 1.4973 | 28.0 | 168 | 1.5945 | 0.1860 |
| 1.5002 | 29.0 | 174 | 1.5945 | 0.1860 |
| 1.4807 | 30.0 | 180 | 1.5944 | 0.1860 |
| 1.4807 | 31.0 | 186 | 1.5943 | 0.1860 |
| 1.486 | 32.0 | 192 | 1.5942 | 0.1860 |
| 1.486 | 33.0 | 198 | 1.5941 | 0.1860 |
| 1.4927 | 34.0 | 204 | 1.5941 | 0.1860 |
| 1.4875 | 35.0 | 210 | 1.5940 | 0.1860 |
| 1.4875 | 36.0 | 216 | 1.5940 | 0.1860 |
| 1.5166 | 37.0 | 222 | 1.5940 | 0.1860 |
| 1.5166 | 38.0 | 228 | 1.5939 | 0.1860 |
| 1.5127 | 39.0 | 234 | 1.5939 | 0.1860 |
| 1.4974 | 40.0 | 240 | 1.5939 | 0.1860 |
| 1.4974 | 41.0 | 246 | 1.5939 | 0.1860 |
| 1.4716 | 42.0 | 252 | 1.5939 | 0.1860 |
| 1.4716 | 43.0 | 258 | 1.5939 | 0.1860 |
| 1.5277 | 44.0 | 264 | 1.5939 | 0.1860 |
| 1.501 | 45.0 | 270 | 1.5939 | 0.1860 |
| 1.501 | 46.0 | 276 | 1.5939 | 0.1860 |
| 1.4805 | 47.0 | 282 | 1.5939 | 0.1860 |
| 1.4805 | 48.0 | 288 | 1.5939 | 0.1860 |
| 1.5052 | 49.0 | 294 | 1.5939 | 0.1860 |
| 1.536 | 50.0 | 300 | 1.5939 | 0.1860 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Santp98/distilbert-base-spanish-uncased-2023-11-13-18-01 | Santp98 | 2023-11-13T19:16:11Z | 47 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:dccuchile/distilbert-base-spanish-uncased",
"base_model:finetune:dccuchile/distilbert-base-spanish-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-11-13T18:01:10Z | ---
base_model: dccuchile/distilbert-base-spanish-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-spanish-uncased-2023-11-13-18-01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-spanish-uncased-2023-11-13-18-01
This model is a fine-tuned version of [dccuchile/distilbert-base-spanish-uncased](https://huggingface.co/dccuchile/distilbert-base-spanish-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3884
## Model description
More information needed
## Intended uses & limitations
More information needed
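A minimal fill-mask sketch with the `transformers` pipeline; the example sentence is a placeholder:
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="Santp98/distilbert-base-spanish-uncased-2023-11-13-18-01")
print(unmasker("la capital de españa es [MASK]."))
```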
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5316 | 0.59 | 500 | 2.2591 |
| 2.2197 | 1.19 | 1000 | 1.9800 |
| 2.0108 | 1.78 | 1500 | 1.8304 |
| 1.855 | 2.38 | 2000 | 1.7229 |
| 1.7933 | 2.97 | 2500 | 1.6561 |
| 1.7466 | 3.56 | 3000 | 1.6101 |
| 1.6906 | 4.16 | 3500 | 1.5588 |
| 1.6301 | 4.75 | 4000 | 1.5264 |
| 1.6131 | 5.34 | 4500 | 1.4893 |
| 1.5731 | 5.94 | 5000 | 1.4626 |
| 1.5248 | 6.53 | 5500 | 1.4445 |
| 1.509 | 7.13 | 6000 | 1.4274 |
| 1.5083 | 7.72 | 6500 | 1.4109 |
| 1.5277 | 8.31 | 7000 | 1.4051 |
| 1.4884 | 8.91 | 7500 | 1.3872 |
| 1.496 | 9.5 | 8000 | 1.3912 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
hkivancoral/hushem_1x_deit_small_sgd_00001_fold1 | hkivancoral | 2023-11-13T19:16:04Z | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-13T19:13:17Z | ---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_1x_deit_small_sgd_00001_fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.28888888888888886
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_1x_deit_small_sgd_00001_fold1
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5045
- Accuracy: 0.2889
## Model description
More information needed
## Intended uses & limitations
More information needed
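No usage notes are given, but a classifier fine-tuned from DeiT can normally be called through the `image-classification` pipeline. A minimal sketch, assuming the checkpoint is available under this card's repository id; the image path is a placeholder:

```python
from transformers import pipeline

# Load the fine-tuned DeiT-small classifier from this card's repo.
classifier = pipeline(
    "image-classification",
    model="hkivancoral/hushem_1x_deit_small_sgd_00001_fold1",
)

# "sample.jpg" is a placeholder; any PIL-readable image file works.
print(classifier("sample.jpg"))
```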
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
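The accuracy column in the results table below is typically produced by passing a `compute_metrics` function to the `Trainer`. A minimal sketch, assuming the `evaluate` library is available; the function name and wiring are illustrative, not taken from this card:

```python
import numpy as np
import evaluate

# Hugging Face's reference accuracy metric.
accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # The Trainer passes (logits, labels); take the argmax over classes.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```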
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 1.5103 | 0.2889 |
| 1.5406 | 2.0 | 12 | 1.5100 | 0.2889 |
| 1.5406 | 3.0 | 18 | 1.5097 | 0.2889 |
| 1.5187 | 4.0 | 24 | 1.5094 | 0.2889 |
| 1.5371 | 5.0 | 30 | 1.5091 | 0.2889 |
| 1.5371 | 6.0 | 36 | 1.5089 | 0.2889 |
| 1.517 | 7.0 | 42 | 1.5086 | 0.2889 |
| 1.517 | 8.0 | 48 | 1.5084 | 0.2889 |
| 1.5407 | 9.0 | 54 | 1.5081 | 0.2889 |
| 1.5157 | 10.0 | 60 | 1.5079 | 0.2889 |
| 1.5157 | 11.0 | 66 | 1.5077 | 0.2889 |
| 1.5121 | 12.0 | 72 | 1.5074 | 0.2889 |
| 1.5121 | 13.0 | 78 | 1.5072 | 0.2889 |
| 1.538 | 14.0 | 84 | 1.5070 | 0.2889 |
| 1.5262 | 15.0 | 90 | 1.5068 | 0.2889 |
| 1.5262 | 16.0 | 96 | 1.5066 | 0.2889 |
| 1.5233 | 17.0 | 102 | 1.5064 | 0.2889 |
| 1.5233 | 18.0 | 108 | 1.5063 | 0.2889 |
| 1.5376 | 19.0 | 114 | 1.5061 | 0.2889 |
| 1.5005 | 20.0 | 120 | 1.5060 | 0.2889 |
| 1.5005 | 21.0 | 126 | 1.5058 | 0.2889 |
| 1.5271 | 22.0 | 132 | 1.5057 | 0.2889 |
| 1.5271 | 23.0 | 138 | 1.5056 | 0.2889 |
| 1.5205 | 24.0 | 144 | 1.5055 | 0.2889 |
| 1.5085 | 25.0 | 150 | 1.5054 | 0.2889 |
| 1.5085 | 26.0 | 156 | 1.5053 | 0.2889 |
| 1.5221 | 27.0 | 162 | 1.5052 | 0.2889 |
| 1.5221 | 28.0 | 168 | 1.5051 | 0.2889 |
| 1.5344 | 29.0 | 174 | 1.5050 | 0.2889 |
| 1.5325 | 30.0 | 180 | 1.5049 | 0.2889 |
| 1.5325 | 31.0 | 186 | 1.5048 | 0.2889 |
| 1.5365 | 32.0 | 192 | 1.5048 | 0.2889 |
| 1.5365 | 33.0 | 198 | 1.5047 | 0.2889 |
| 1.5421 | 34.0 | 204 | 1.5046 | 0.2889 |
| 1.5276 | 35.0 | 210 | 1.5046 | 0.2889 |
| 1.5276 | 36.0 | 216 | 1.5046 | 0.2889 |
| 1.5101 | 37.0 | 222 | 1.5045 | 0.2889 |
| 1.5101 | 38.0 | 228 | 1.5045 | 0.2889 |
| 1.5025 | 39.0 | 234 | 1.5045 | 0.2889 |
| 1.5405 | 40.0 | 240 | 1.5045 | 0.2889 |
| 1.5405 | 41.0 | 246 | 1.5045 | 0.2889 |
| 1.5373 | 42.0 | 252 | 1.5045 | 0.2889 |
| 1.5373 | 43.0 | 258 | 1.5045 | 0.2889 |
| 1.5465 | 44.0 | 264 | 1.5045 | 0.2889 |
| 1.4924 | 45.0 | 270 | 1.5045 | 0.2889 |
| 1.4924 | 46.0 | 276 | 1.5045 | 0.2889 |
| 1.521 | 47.0 | 282 | 1.5045 | 0.2889 |
| 1.521 | 48.0 | 288 | 1.5045 | 0.2889 |
| 1.494 | 49.0 | 294 | 1.5045 | 0.2889 |
| 1.5268 | 50.0 | 300 | 1.5045 | 0.2889 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
hkivancoral/hushem_1x_deit_small_sgd_0001_fold5 | hkivancoral | 2023-11-13T19:04:09Z | 13 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-13T19:01:23Z | ---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_1x_deit_small_sgd_0001_fold5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.2682926829268293
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_1x_deit_small_sgd_0001_fold5
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4475
- Accuracy: 0.2683
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (the warmup schedule is sketched after the list):
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
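The `lr_scheduler_warmup_ratio: 0.1` entry means the first 10% of optimization steps ramp the learning rate up linearly before the linear decay begins. A small sketch of reproducing that schedule outside the `Trainer`; the model is a placeholder, and the step count (300 = 50 epochs x 6 steps) is read off the results table below:

```python
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(8, 4)  # placeholder model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999), eps=1e-8)

num_training_steps = 300                          # 50 epochs x 6 steps, per the table
num_warmup_steps = int(0.1 * num_training_steps)  # warmup_ratio = 0.1

scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=num_warmup_steps,
    num_training_steps=num_training_steps,
)
```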
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 1.5161 | 0.2439 |
| 1.5359 | 2.0 | 12 | 1.5118 | 0.2439 |
| 1.5359 | 3.0 | 18 | 1.5076 | 0.2439 |
| 1.5171 | 4.0 | 24 | 1.5040 | 0.2439 |
| 1.5208 | 5.0 | 30 | 1.5002 | 0.2683 |
| 1.5208 | 6.0 | 36 | 1.4969 | 0.2683 |
| 1.5066 | 7.0 | 42 | 1.4937 | 0.2683 |
| 1.5066 | 8.0 | 48 | 1.4908 | 0.2683 |
| 1.4941 | 9.0 | 54 | 1.4878 | 0.2683 |
| 1.4953 | 10.0 | 60 | 1.4851 | 0.2683 |
| 1.4953 | 11.0 | 66 | 1.4825 | 0.2683 |
| 1.498 | 12.0 | 72 | 1.4798 | 0.2683 |
| 1.498 | 13.0 | 78 | 1.4774 | 0.2683 |
| 1.465 | 14.0 | 84 | 1.4753 | 0.2683 |
| 1.4811 | 15.0 | 90 | 1.4730 | 0.2683 |
| 1.4811 | 16.0 | 96 | 1.4709 | 0.2683 |
| 1.476 | 17.0 | 102 | 1.4689 | 0.2683 |
| 1.476 | 18.0 | 108 | 1.4672 | 0.2683 |
| 1.4977 | 19.0 | 114 | 1.4656 | 0.2683 |
| 1.4745 | 20.0 | 120 | 1.4639 | 0.2683 |
| 1.4745 | 21.0 | 126 | 1.4624 | 0.2683 |
| 1.4662 | 22.0 | 132 | 1.4609 | 0.2683 |
| 1.4662 | 23.0 | 138 | 1.4594 | 0.2683 |
| 1.4905 | 24.0 | 144 | 1.4581 | 0.2683 |
| 1.465 | 25.0 | 150 | 1.4568 | 0.2683 |
| 1.465 | 26.0 | 156 | 1.4556 | 0.2683 |
| 1.4499 | 27.0 | 162 | 1.4545 | 0.2683 |
| 1.4499 | 28.0 | 168 | 1.4535 | 0.2683 |
| 1.473 | 29.0 | 174 | 1.4527 | 0.2683 |
| 1.4704 | 30.0 | 180 | 1.4520 | 0.2683 |
| 1.4704 | 31.0 | 186 | 1.4512 | 0.2683 |
| 1.4654 | 32.0 | 192 | 1.4506 | 0.2683 |
| 1.4654 | 33.0 | 198 | 1.4500 | 0.2683 |
| 1.4322 | 34.0 | 204 | 1.4494 | 0.2683 |
| 1.459 | 35.0 | 210 | 1.4490 | 0.2683 |
| 1.459 | 36.0 | 216 | 1.4486 | 0.2683 |
| 1.4499 | 37.0 | 222 | 1.4482 | 0.2683 |
| 1.4499 | 38.0 | 228 | 1.4480 | 0.2683 |
| 1.4314 | 39.0 | 234 | 1.4477 | 0.2683 |
| 1.4745 | 40.0 | 240 | 1.4476 | 0.2683 |
| 1.4745 | 41.0 | 246 | 1.4476 | 0.2683 |
| 1.4482 | 42.0 | 252 | 1.4475 | 0.2683 |
| 1.4482 | 43.0 | 258 | 1.4475 | 0.2683 |
| 1.4526 | 44.0 | 264 | 1.4475 | 0.2683 |
| 1.4693 | 45.0 | 270 | 1.4475 | 0.2683 |
| 1.4693 | 46.0 | 276 | 1.4475 | 0.2683 |
| 1.4506 | 47.0 | 282 | 1.4475 | 0.2683 |
| 1.4506 | 48.0 | 288 | 1.4475 | 0.2683 |
| 1.4529 | 49.0 | 294 | 1.4475 | 0.2683 |
| 1.4667 | 50.0 | 300 | 1.4475 | 0.2683 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
eljandoubi/whisper-tiny-en | eljandoubi | 2023-11-13T18:59:37Z | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-11-13T17:42:31Z | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-en
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train
args: en-US
metrics:
- name: Wer
type: wer
value: 0.3451917732073374
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-en
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6879
- Wer Ortho: 0.3510
- Wer: 0.3452
## Model description
More information needed
## Intended uses & limitations
More information needed
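The card does not describe usage, but a fine-tuned Whisper checkpoint can usually be run through the `automatic-speech-recognition` pipeline. A minimal sketch, assuming this card's repository id; the audio path is a placeholder:

```python
from transformers import pipeline

# Load the fine-tuned Whisper-tiny model from this card's repo.
asr = pipeline(
    "automatic-speech-recognition",
    model="eljandoubi/whisper-tiny-en",
)

# "audio.wav" is a placeholder path; MInDS-14 clips are short banking queries.
print(asr("audio.wav")["text"])
```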
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an equivalent `Seq2SeqTrainingArguments` sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP
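These values correspond to `Seq2SeqTrainingArguments` fields. A minimal sketch of an equivalent configuration; the output directory name is assumed, and `fp16=True` stands in for "Native AMP" mixed precision:

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the list above: constant schedule after 50 warmup steps,
# 500 total optimization steps, fp16 mixed precision.
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-tiny-en",  # assumed name, not from the card
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="constant_with_warmup",
    warmup_steps=50,
    max_steps=500,
    fp16=True,
)
```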
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.0008 | 17.24 | 500 | 0.6879 | 0.3510 | 0.3452 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|