| modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-29 06:27:49) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 502 classes) | tags (sequence, 1 to 4.05k items) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-29 06:23:06) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
PhillipGuo/hp-lat-llama-No_PCA-epsilon6.0-pgd_layer8_16_24_30-def_layer0-wikitext-9 | PhillipGuo | 2024-05-24T23:29:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-24T23:29:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
PhillipGuo/hp-lat-llama-No_PCA-epsilon0.5-pgd_layer8_16_24_30-def_layer0-wikitext-10 | PhillipGuo | 2024-05-24T23:29:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-24T23:29:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
PhillipGuo/hp-lat-llama-No_PCA-epsilon1.5-pgd_layer8_16_24_30-def_layer0-wikitext-9 | PhillipGuo | 2024-05-24T23:28:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-24T23:28:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
PhillipGuo/hp-lat-llama-No_PCA-epsilon0.5-pgd_layer8_16_24_30-def_layer0-wikitext-9 | PhillipGuo | 2024-05-24T23:28:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-24T23:27:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
etxereal/q-FrozenLake-v1-4x4-noSlippery | etxereal | 2024-05-24T23:28:05Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-24T23:27:20Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook (not a library import).
model = load_from_hub(repo_id="etxereal/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
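A minimal evaluation sketch might follow (assuming, as in the Deep RL course, that the loaded dict exposes a `qtable` key and that the Gymnasium-style five-value `step` API is in use):

```python
import numpy as np

# Roll out the greedy policy from the loaded Q-table for one episode.
state, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action for this state
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode return: {total_reward}")
```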
|
Omriy123/vit_epochs5_batch32_lr5e-05_size224_tiles4_seed2_q1 | Omriy123 | 2024-05-24T23:24:54Z | 221 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-24T23:11:00Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- image-classification
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit_epochs5_batch32_lr5e-05_size224_tiles4_seed2_q1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: Dogs_vs_Cats
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9832
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit_epochs5_batch32_lr5e-05_size224_tiles4_seed2_q1
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the Dogs_vs_Cats dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0483
- Accuracy: 0.9832
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
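For reference, the list above maps onto a `TrainingArguments` configuration roughly as follows (a hypothetical reconstruction, not the actual training script):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="vit_epochs5_batch32_lr5e-05_size224_tiles4_seed2_q1",  # illustrative path
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```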
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.044 | 1.0 | 469 | 0.0483 | 0.9832 |
| 0.0205 | 2.0 | 938 | 0.0737 | 0.9824 |
| 0.0003 | 3.0 | 1407 | 0.0791 | 0.984 |
| 0.0001 | 4.0 | 1876 | 0.0713 | 0.9861 |
| 0.0001 | 5.0 | 2345 | 0.0712 | 0.9864 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.19.1
|
KuriT/dummy-model | KuriT | 2024-05-24T23:10:51Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-24T23:10:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
JawadC/scarmoza | JawadC | 2024-05-24T23:06:10Z | 1 | 1 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-24T22:39:17Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of SCARMOZA cheese
widget:
- text: A small yellow ball of SCARMOZA cheese on a rustic wooden table.
output:
url: image_0.png
- text: A small yellow ball of SCARMOZA cheese on a rustic wooden table.
output:
url: image_1.png
- text: A small yellow ball of SCARMOZA cheese on a rustic wooden table.
output:
url: image_2.png
- text: A small yellow ball of SCARMOZA cheese on a rustic wooden table.
output:
url: image_3.png
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - JawadC/scarmoza
<Gallery />
## Model description
These are JawadC/scarmoza LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of SCARMOZA cheese` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/JawadC/scarmoza/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
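The snippet above is left as a TODO in the card; a minimal sketch using diffusers' LoRA loading could look like this (only the repo id comes from this card, everything else is assumed):

```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model this LoRA was trained against, then attach the adapter weights.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("JawadC/scarmoza")

image = pipe("A small yellow ball of SCARMOZA cheese on a rustic wooden table.").images[0]
image.save("scarmoza.png")
```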
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
RichardErkhov/vicgalle_-_OpenBeagle-11B-gguf | RichardErkhov | 2024-05-24T23:04:42Z | 24 | 0 | null | [
"gguf",
"region:us"
] | null | 2024-05-24T19:43:05Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
OpenBeagle-11B - GGUF
- Model creator: https://huggingface.co/vicgalle/
- Original model: https://huggingface.co/vicgalle/OpenBeagle-11B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [OpenBeagle-11B.Q2_K.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_OpenBeagle-11B-gguf/blob/main/OpenBeagle-11B.Q2_K.gguf) | Q2_K | 3.73GB |
| [OpenBeagle-11B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_OpenBeagle-11B-gguf/blob/main/OpenBeagle-11B.IQ3_XS.gguf) | IQ3_XS | 4.14GB |
| [OpenBeagle-11B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_OpenBeagle-11B-gguf/blob/main/OpenBeagle-11B.IQ3_S.gguf) | IQ3_S | 4.37GB |
| [OpenBeagle-11B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_OpenBeagle-11B-gguf/blob/main/OpenBeagle-11B.Q3_K_S.gguf) | Q3_K_S | 4.34GB |
| [OpenBeagle-11B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_OpenBeagle-11B-gguf/blob/main/OpenBeagle-11B.IQ3_M.gguf) | IQ3_M | 4.51GB |
| [OpenBeagle-11B.Q3_K.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_OpenBeagle-11B-gguf/blob/main/OpenBeagle-11B.Q3_K.gguf) | Q3_K | 4.84GB |
| [OpenBeagle-11B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_OpenBeagle-11B-gguf/blob/main/OpenBeagle-11B.Q3_K_M.gguf) | Q3_K_M | 4.84GB |
| [OpenBeagle-11B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_OpenBeagle-11B-gguf/blob/main/OpenBeagle-11B.Q3_K_L.gguf) | Q3_K_L | 5.26GB |
| [OpenBeagle-11B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_OpenBeagle-11B-gguf/blob/main/OpenBeagle-11B.IQ4_XS.gguf) | IQ4_XS | 5.43GB |
| [OpenBeagle-11B.Q4_0.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_OpenBeagle-11B-gguf/blob/main/OpenBeagle-11B.Q4_0.gguf) | Q4_0 | 5.66GB |
| [OpenBeagle-11B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_OpenBeagle-11B-gguf/blob/main/OpenBeagle-11B.IQ4_NL.gguf) | IQ4_NL | 5.72GB |
| [OpenBeagle-11B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_OpenBeagle-11B-gguf/blob/main/OpenBeagle-11B.Q4_K_S.gguf) | Q4_K_S | 5.7GB |
| [OpenBeagle-11B.Q4_K.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_OpenBeagle-11B-gguf/blob/main/OpenBeagle-11B.Q4_K.gguf) | Q4_K | 6.02GB |
| [OpenBeagle-11B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_OpenBeagle-11B-gguf/blob/main/OpenBeagle-11B.Q4_K_M.gguf) | Q4_K_M | 6.02GB |
| [OpenBeagle-11B.Q4_1.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_OpenBeagle-11B-gguf/blob/main/OpenBeagle-11B.Q4_1.gguf) | Q4_1 | 6.27GB |
| [OpenBeagle-11B.Q5_0.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_OpenBeagle-11B-gguf/blob/main/OpenBeagle-11B.Q5_0.gguf) | Q5_0 | 6.89GB |
| [OpenBeagle-11B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_OpenBeagle-11B-gguf/blob/main/OpenBeagle-11B.Q5_K_S.gguf) | Q5_K_S | 6.89GB |
| [OpenBeagle-11B.Q5_K.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_OpenBeagle-11B-gguf/blob/main/OpenBeagle-11B.Q5_K.gguf) | Q5_K | 7.08GB |
| [OpenBeagle-11B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_OpenBeagle-11B-gguf/blob/main/OpenBeagle-11B.Q5_K_M.gguf) | Q5_K_M | 7.08GB |
| [OpenBeagle-11B.Q5_1.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_OpenBeagle-11B-gguf/blob/main/OpenBeagle-11B.Q5_1.gguf) | Q5_1 | 7.51GB |
| [OpenBeagle-11B.Q6_K.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_OpenBeagle-11B-gguf/blob/main/OpenBeagle-11B.Q6_K.gguf) | Q6_K | 8.2GB |
| [OpenBeagle-11B.Q8_0.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_OpenBeagle-11B-gguf/blob/main/OpenBeagle-11B.Q8_0.gguf) | Q8_0 | 10.62GB |
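One way to fetch a single quant from this repo is `huggingface_hub` (a sketch; pick whichever file from the table fits your hardware):

```python
from huggingface_hub import hf_hub_download

# Download one quantized file; the filename comes from the table above.
path = hf_hub_download(
    repo_id="RichardErkhov/vicgalle_-_OpenBeagle-11B-gguf",
    filename="OpenBeagle-11B.Q4_K_M.gguf",
)
print(path)  # local path, ready for a GGUF runtime such as llama.cpp
```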
Original model description:
---
license: apache-2.0
library_name: transformers
datasets:
- vicgalle/OpenHermesPreferences-1k
model-index:
- name: OpenBeagle-11B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 70.48
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/OpenBeagle-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.76
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/OpenBeagle-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.94
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/OpenBeagle-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 67.01
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/OpenBeagle-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.5
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/OpenBeagle-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.41
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/OpenBeagle-11B
name: Open LLM Leaderboard
---
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vicgalle__OpenBeagle-11B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |73.85|
|AI2 Reasoning Challenge (25-Shot)|70.48|
|HellaSwag (10-Shot) |88.76|
|MMLU (5-Shot) |66.94|
|TruthfulQA (0-shot) |67.01|
|Winogrande (5-shot) |83.50|
|GSM8k (5-shot) |66.41|
|
berkaygkv/kullm-f16-q5 | berkaygkv | 2024-05-24T23:00:42Z | 7 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-24T22:56:12Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** berkaygkv
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
LarryAIDraw/borschtmix_v10 | LarryAIDraw | 2024-05-24T22:56:35Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-05-24T22:29:29Z | ---
license: creativeml-openrail-m
---
|
hgnoi/b7L3i6zpmXSdU8Xw | hgnoi | 2024-05-24T22:55:30Z | 88 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T22:53:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
soufyane/gemma_2b_FT_DataScience_V1 | soufyane | 2024-05-24T22:53:14Z | 7 | 1 | keras-nlp | [
"keras-nlp",
"text-generation",
"region:us"
] | text-generation | 2024-05-24T22:48:57Z | ---
library_name: keras-nlp
pipeline_tag: text-generation
---
This is a [`Gemma` model](https://keras.io/api/keras_nlp/models/gemma) uploaded using the KerasNLP library and can be used with JAX, TensorFlow, and PyTorch backends.
This model is related to a `CausalLM` task.
Model config:
* **name:** gemma_backbone
* **trainable:** True
* **vocabulary_size:** 256000
* **num_layers:** 18
* **num_query_heads:** 8
* **num_key_value_heads:** 1
* **hidden_dim:** 2048
* **intermediate_dim:** 32768
* **head_dim:** 256
* **layer_norm_epsilon:** 1e-06
* **dropout:** 0
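A loading sketch (assuming a recent `keras-nlp` release with Hub preset support; the prompt is illustrative):

```python
import keras_nlp

# Load the uploaded preset straight from the Hugging Face Hub and generate text.
causal_lm = keras_nlp.models.GemmaCausalLM.from_preset(
    "hf://soufyane/gemma_2b_FT_DataScience_V1"
)
print(causal_lm.generate("Explain what a confusion matrix is.", max_length=128))
```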
This model card has been generated automatically and should be completed by the model author. See [Model Cards documentation](https://huggingface.co/docs/hub/model-cards) for more information.
|
vicaloy/Llama-2-70B-chat-hf-dmcvs-full | vicaloy | 2024-05-24T22:46:20Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"autotrain",
"text-generation-inference",
"peft",
"conversational",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-21T22:28:53Z | ---
license: other
library_name: transformers
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
widget:
- messages:
- role: user
content: What is your favorite condiment?
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
deoates/lora_llama_8b_ft_product_summaries_3000_examples_2_epoch | deoates | 2024-05-24T22:43:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-24T22:43:01Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** deoates
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
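A loading sketch with Unsloth (assumed usage; the repo id comes from this card, the sequence length is illustrative):

```python
from unsloth import FastLanguageModel

# Load the fine-tuned weights in 4-bit and switch to Unsloth's faster inference mode.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="deoates/lora_llama_8b_ft_product_summaries_3000_examples_2_epoch",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)
```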
|
qgallouedec/idefics2-cord-demo-v2 | qgallouedec | 2024-05-24T22:41:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-11T17:46:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
digiplay/2K-VAE | digiplay | 2024-05-24T22:40:26Z | 472 | 4 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-11-01T15:01:06Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
library_name: diffusers
---
The 2K model merged with the 840000 VAE.
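A minimal diffusers loading sketch (assumed usage; the prompt is shortened from the sample below):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/2K-VAE", torch_dtype=torch.float16
).to("cuda")
image = pipe(
    "digital painting, anime, close up of pretty cute asian girl, blue eyes, beach"
).images[0]
image.save("sample.png")
```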
Generated by Hugging Face's API:
digital painting, anime, trending on artstation close up of pretty cute asian girl, tattoos, centered, (messy bun), blue eyes, pale skin, behind trees, (high detailed skin:1.2), beach, Fujifilm XT3, (high detailed face:1.3),canvas by Mucha and ROSSDRAWS,





Generated by AUTOMATIC1111:
 |
Omriy123/vit_epochs5_batch32_lr5e-05_size224_tiles4_seed1_q1 | Omriy123 | 2024-05-24T22:39:13Z | 182 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-24T22:24:48Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- image-classification
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit_epochs5_batch32_lr5e-05_size224_tiles4_seed1_q1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: Dogs_vs_Cats
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9842666666666666
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit_epochs5_batch32_lr5e-05_size224_tiles4_seed1_q1
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the Dogs_vs_Cats dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0512
- Accuracy: 0.9843
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0442 | 1.0 | 469 | 0.0512 | 0.9843 |
| 0.0388 | 2.0 | 938 | 0.0551 | 0.9864 |
| 0.0002 | 3.0 | 1407 | 0.0632 | 0.9885 |
| 0.0001 | 4.0 | 1876 | 0.0605 | 0.9875 |
| 0.0001 | 5.0 | 2345 | 0.0602 | 0.988 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.19.1
|
apwic/nerugm-lora-r4a0d0.1 | apwic | 2024-05-24T22:36:26Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"id",
"base_model:indolem/indobert-base-uncased",
"base_model:finetune:indolem/indobert-base-uncased",
"license:mit",
"region:us"
] | null | 2024-05-24T14:54:04Z | ---
language:
- id
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: nerugm-lora-r4a0d0.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nerugm-lora-r4a0d0.1
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1302
- Precision: 0.7375
- Recall: 0.8605
- F1: 0.7943
- Accuracy: 0.9573
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.7665 | 1.0 | 528 | 0.4290 | 0.3803 | 0.1255 | 0.1887 | 0.8711 |
| 0.336 | 2.0 | 1056 | 0.2177 | 0.6187 | 0.7751 | 0.6882 | 0.9335 |
| 0.2067 | 3.0 | 1584 | 0.1743 | 0.6523 | 0.8187 | 0.7261 | 0.9410 |
| 0.1734 | 4.0 | 2112 | 0.1525 | 0.7026 | 0.8443 | 0.7670 | 0.9500 |
| 0.1557 | 5.0 | 2640 | 0.1442 | 0.7125 | 0.8512 | 0.7757 | 0.9524 |
| 0.146 | 6.0 | 3168 | 0.1445 | 0.7085 | 0.8629 | 0.7781 | 0.9520 |
| 0.1397 | 7.0 | 3696 | 0.1444 | 0.7145 | 0.8768 | 0.7874 | 0.9525 |
| 0.1338 | 8.0 | 4224 | 0.1386 | 0.7262 | 0.8675 | 0.7906 | 0.9545 |
| 0.1277 | 9.0 | 4752 | 0.1365 | 0.7395 | 0.8629 | 0.7965 | 0.9561 |
| 0.1255 | 10.0 | 5280 | 0.1332 | 0.7348 | 0.8629 | 0.7937 | 0.9563 |
| 0.1215 | 11.0 | 5808 | 0.1330 | 0.7242 | 0.8652 | 0.7885 | 0.9557 |
| 0.1189 | 12.0 | 6336 | 0.1340 | 0.7342 | 0.8652 | 0.7943 | 0.9561 |
| 0.1179 | 13.0 | 6864 | 0.1295 | 0.7445 | 0.8582 | 0.7973 | 0.9571 |
| 0.114 | 14.0 | 7392 | 0.1295 | 0.7446 | 0.8675 | 0.8014 | 0.9579 |
| 0.1128 | 15.0 | 7920 | 0.1317 | 0.7371 | 0.8652 | 0.7960 | 0.9571 |
| 0.1115 | 16.0 | 8448 | 0.1300 | 0.7376 | 0.8675 | 0.7973 | 0.9575 |
| 0.1109 | 17.0 | 8976 | 0.1307 | 0.7357 | 0.8652 | 0.7952 | 0.9577 |
| 0.1097 | 18.0 | 9504 | 0.1319 | 0.7386 | 0.8652 | 0.7969 | 0.9575 |
| 0.1086 | 19.0 | 10032 | 0.1296 | 0.7375 | 0.8605 | 0.7943 | 0.9573 |
| 0.1094 | 20.0 | 10560 | 0.1302 | 0.7375 | 0.8605 | 0.7943 | 0.9573 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
TERR1E/fungiclef | TERR1E | 2024-05-24T22:32:53Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-24T22:09:44Z | ---
license: apache-2.0
---
|
RichardErkhov/beomi_-_Llama-3-KoEn-8B-Instruct-preview-gguf | RichardErkhov | 2024-05-24T22:32:29Z | 11 | 0 | null | [
"gguf",
"arxiv:2310.04799",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-24T20:18:10Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3-KoEn-8B-Instruct-preview - GGUF
- Model creator: https://huggingface.co/beomi/
- Original model: https://huggingface.co/beomi/Llama-3-KoEn-8B-Instruct-preview/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-3-KoEn-8B-Instruct-preview.Q2_K.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-KoEn-8B-Instruct-preview-gguf/blob/main/Llama-3-KoEn-8B-Instruct-preview.Q2_K.gguf) | Q2_K | 2.96GB |
| [Llama-3-KoEn-8B-Instruct-preview.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-KoEn-8B-Instruct-preview-gguf/blob/main/Llama-3-KoEn-8B-Instruct-preview.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [Llama-3-KoEn-8B-Instruct-preview.IQ3_S.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-KoEn-8B-Instruct-preview-gguf/blob/main/Llama-3-KoEn-8B-Instruct-preview.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [Llama-3-KoEn-8B-Instruct-preview.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-KoEn-8B-Instruct-preview-gguf/blob/main/Llama-3-KoEn-8B-Instruct-preview.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [Llama-3-KoEn-8B-Instruct-preview.IQ3_M.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-KoEn-8B-Instruct-preview-gguf/blob/main/Llama-3-KoEn-8B-Instruct-preview.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [Llama-3-KoEn-8B-Instruct-preview.Q3_K.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-KoEn-8B-Instruct-preview-gguf/blob/main/Llama-3-KoEn-8B-Instruct-preview.Q3_K.gguf) | Q3_K | 3.74GB |
| [Llama-3-KoEn-8B-Instruct-preview.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-KoEn-8B-Instruct-preview-gguf/blob/main/Llama-3-KoEn-8B-Instruct-preview.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [Llama-3-KoEn-8B-Instruct-preview.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-KoEn-8B-Instruct-preview-gguf/blob/main/Llama-3-KoEn-8B-Instruct-preview.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [Llama-3-KoEn-8B-Instruct-preview.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-KoEn-8B-Instruct-preview-gguf/blob/main/Llama-3-KoEn-8B-Instruct-preview.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [Llama-3-KoEn-8B-Instruct-preview.Q4_0.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-KoEn-8B-Instruct-preview-gguf/blob/main/Llama-3-KoEn-8B-Instruct-preview.Q4_0.gguf) | Q4_0 | 4.34GB |
| [Llama-3-KoEn-8B-Instruct-preview.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-KoEn-8B-Instruct-preview-gguf/blob/main/Llama-3-KoEn-8B-Instruct-preview.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [Llama-3-KoEn-8B-Instruct-preview.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-KoEn-8B-Instruct-preview-gguf/blob/main/Llama-3-KoEn-8B-Instruct-preview.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [Llama-3-KoEn-8B-Instruct-preview.Q4_K.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-KoEn-8B-Instruct-preview-gguf/blob/main/Llama-3-KoEn-8B-Instruct-preview.Q4_K.gguf) | Q4_K | 4.58GB |
| [Llama-3-KoEn-8B-Instruct-preview.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-KoEn-8B-Instruct-preview-gguf/blob/main/Llama-3-KoEn-8B-Instruct-preview.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [Llama-3-KoEn-8B-Instruct-preview.Q4_1.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-KoEn-8B-Instruct-preview-gguf/blob/main/Llama-3-KoEn-8B-Instruct-preview.Q4_1.gguf) | Q4_1 | 4.78GB |
| [Llama-3-KoEn-8B-Instruct-preview.Q5_0.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-KoEn-8B-Instruct-preview-gguf/blob/main/Llama-3-KoEn-8B-Instruct-preview.Q5_0.gguf) | Q5_0 | 5.21GB |
| [Llama-3-KoEn-8B-Instruct-preview.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-KoEn-8B-Instruct-preview-gguf/blob/main/Llama-3-KoEn-8B-Instruct-preview.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [Llama-3-KoEn-8B-Instruct-preview.Q5_K.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-KoEn-8B-Instruct-preview-gguf/blob/main/Llama-3-KoEn-8B-Instruct-preview.Q5_K.gguf) | Q5_K | 5.34GB |
| [Llama-3-KoEn-8B-Instruct-preview.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-KoEn-8B-Instruct-preview-gguf/blob/main/Llama-3-KoEn-8B-Instruct-preview.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [Llama-3-KoEn-8B-Instruct-preview.Q5_1.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-KoEn-8B-Instruct-preview-gguf/blob/main/Llama-3-KoEn-8B-Instruct-preview.Q5_1.gguf) | Q5_1 | 5.65GB |
| [Llama-3-KoEn-8B-Instruct-preview.Q6_K.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-KoEn-8B-Instruct-preview-gguf/blob/main/Llama-3-KoEn-8B-Instruct-preview.Q6_K.gguf) | Q6_K | 6.14GB |
| [Llama-3-KoEn-8B-Instruct-preview.Q8_0.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-KoEn-8B-Instruct-preview-gguf/blob/main/Llama-3-KoEn-8B-Instruct-preview.Q8_0.gguf) | Q8_0 | 7.95GB |
Original model description:
---
language:
- en
- ko
license: cc-by-nc-sa-4.0
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- llama-3-ko
pipeline_tag: text-generation
license_name: llama3
license_link: LICENSE
---
## Llama-3-KoEn-8B-Instruct-preview
> Update @ 2024.05.01: Pre-Release [Llama-3-KoEn-8B model](https://huggingface.co/beomi/Llama-3-KoEn-8B-preview) & [Llama-3-KoEn-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-KoEn-8B-Instruct-preview)
## Model Details
**Llama-3-KoEn-8B-Instruct-preview**
The Llama-3-KoEn-8B model is a continually pretrained language model based on Llama-3-8B.
Training was done on a TPU v4-256 pod, with the warm support of Google's TRC program.
Applying the idea from the [Chat Vector paper](https://arxiv.org/abs/2310.04799),
I released an instruction model named [Llama-3-KoEn-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-KoEn-8B-Instruct-preview).
It is NOT finetuned with any Korean instruction set (hence `preview`), but it should be a great starting point for creating new Chat/Instruct models.
**Sample usage**
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "beomi/Llama-3-KoEn-8B-Instruct-preview"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype="auto",
device_map="auto",
)
messages = [
{"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."},
{"role": "user", "content": "피보나치 수열이 뭐야? 그리고 피보나치 수열에 대해 파이썬 코드를 짜줘볼래?"},
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
input_ids,
max_new_tokens=512,
eos_token_id=terminators,
do_sample=True,
temperature=1,
top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
**Sample Output**
피보나치 수열은 이탈리아 수학자 레오나르도 피보나치의 이름을 따서 유래된 수열입니다. 피보나치 수열은 특정 조건에 따라 얻어지는 등차수열로, 다음 수를 구하는 단순한 규칙에 의해 생성되는 수열입니다.
1, 1, 2, 3, 5, 8, 13, 21,... (등차 수열)
이러한 수열은 자연 세계에도 많이 발견됩니다. 예를 들어, 새싹이 자라는 방식, 달걀이 부화되는 횟수, 코끼리의 생장 등이 피보나치 수열을 따르기 때문입니다. 🌱🦆🐮
지금부터는 파이썬 코드를 작성해 볼까요? 😊
먼저, 파이썬 코드를 작성하기 위해 필요한 모듈은 없습니다. 저희는 피보나치 수열을 생성하는 알고리즘을 작성할 것입니다.
1. 파이썬 코드 생성
다음은 피보나치 수열을 생성하는 코드입니다. 이 코드는 `while` 루프를 사용하여 피보나치 수열을 생성합니다.
```python
def fib(n):
if n <= 1:
return n
a, b = 0, 1
for _ in range(2, n+1):
a, b = b, a + b
return b
n = int(input("피보나치 수열을 얼마나 생성할지 입력하세요: "))
for i in range(n):
print(fib(i+1))
```
2. 코드 설명
위 코드는 `fib(n)` 함수를 정의합니다. 이 함수는 `n`번째 피보나치 수를 반환합니다.
`fib(n)` 함수는 `if` 조건문을 사용하여 `n`이 0 또는 1일 경우 해당 값은 즉시 반환합니다.
`a, b = 0, 1`은 피보나치 수열의 초기 값을 설정합니다. `a`는 `0`을, `b`는 `1`을 할당합니다.
`for` 루프는 `range(2, n+1)`에서 생성되는 반복을 수행합니다. 이 반복은 `a, b = b, a + b`를 수행하여 피보나치 수열을 생성합니다.
`print(fib(i+1))`는 `fib(n)` 함수 호출 결과를 출력합니다.
3. 코드 실행
위 코드를 실행하기 위해 파이썬 인터프레터에서 다음과 같이 실행할 수 있습니다.
```python
$ python fib.py
피보나치 수열을 얼마나 생성할지 입력하세요: 10
1
1
2
3
5
8
13
21
34
55
```
위 코드를 실행하면, 사용자는 `피보나치 수열을 얼마나 생성할지 입력하세요:`를 입력할 수 있습니다. 이 입력은 `n`에 할당됩니다. `n`은 코드가 피보나치 수열을 생성하는 횟수를 결정하는 변수입니다. 사용자는 이 횟수를 입력하여 피보나치 수열을 생성할 수 있습니다.
이상 파이썬 코드를 작성하고 실행하는 방법을 설명했습니다. 😊
### More samples
> I used the same system prompt, but you could change it on your own.
```python
messages = [
{"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."},
{"role": "user", "content": "한국의 제헌 헌법에 대해서 자세히 설명해주세요."},
]
---
대한민국 헌법은 1948년 7월 17일 제정된 이래 현재까지 대한민국의 기본법인 국가의 기본질서와 국가기관의 권한, 국민의 권리·의무, 통치조직, 통치방법 등을 규정하는 법률입니다. 헌법은 대한민국의 최고법령으로, 일반법상 모든 법률의 근거가 됩니다. 또한 헌법은 1987년 제정된 제6공화국 헌법부터는 '민주공화국'을 기본질서로 규정하고 있습니다.
** 주요 내용 **
1. 기본질서 : 민주공화국, 통일조국의 원칙, 인권의 보장
2. 국민의 권리 : 신체의 자유, 기본적 인권, 재산권, 참정권
3. 국가기관 : 대통령, 국무총리, 국회, 대법원, 헌법재판소, 검찰
4. 통치조직 : 행정부, 입법부, 사법부의 분리
5. 통치방법 : 민주주의(국민의 주권에 의하여 국민으로부터 위임받은 권력 행사), 법치주의(법률에 의하여 통치)
** 주요 조항 **
1. 제1조 : 대한민국은 민주공화국임을 천명
2. 제2조 : 민주공화국의 주권은 국민에게 있으며, 모든 권력은 국민으로부터 나온다
3. 제10조 : 모든 국민은 행복을 추구할 권리를 가진다
4. 제13조 : 모든 국민은 주거의 자유를 갖는다
5. 제25조 : 국가는 재해를 방지하고 그에 대처하는 시책을 시행하여야 한다
6. 제64조 : 대통령은 법률을 공포하고, 국무총리와 국무위원을 임명한다
7. 제111조 : 대법원은 최종적인 법률심판을 행하고, 헌법재판소는 헌법에 위반되는지 여부를 결정
** 헌법의 효력 **
헌법은 대한민국의 기본법률이므로 모든 국민은 헌법에 따라야 합니다. 또한 헌법은 변경하려면 헌법개정안을 제안하고, 이를 국민투표에 붙여야 하며, 이를 통과하면 새 헌법을 제정하게 됩니다. (제130조)
이상, 대한민국 헌법의 주요 내용과 조항, 효력을 설명하였습니다. 한국 헌법은 민주공화국의 기본질서를 중시하고, 기본권을 보호할 뿐만 아니라, 통치조직의 분리, 법치주의, 민주주의를 근간으로 합니다. 🇰🇷
```
```python
messages = [
{"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."},
{"role": "user", "content": "이순신 장군에 대해 설명해줘."},
]
---
이순신 장군은 조선의 문신, 무신, 해군의 영웅으로 1592년부터 1598년까지 임진왜란과 정유재란 동안 해군으로서 활약했습니다. 그는 1591년 전라좌도 수군절도사가 되어 선제공격 전통을 이어받은 바, '신속성'과 '신용', '도탄전' 등의 병법을 도입하여 조선 수군을 개편했습니다. 이순신 장군의 가장 큰 업적으로는 두 가지를 꼽을 수 있습니다. 첫 번째는 '한산도 대첩'입니다. 1592년 7월 8일, 이순신 장군은 이끄는 조선 수군 13척이 왜군 133척에 맞서 13척의 배로는 불가능한 전쟁을 벌이게 됩니다. 이때 이순신 장군은 '진퇴유곡'의 전술을 사용하여 13척의 배를 133척의 배와 싸워 승리를 거두었습니다. 두 번째는 '노량해전'입니다. 1598년 11월 14일, 이순신 장군은 노량에서 12척의 배로 330척의 배를 이끄는 와키자카 야스하루와 싸우게 됩니다. 이때 이순신 장군의 배에 명중된 총알이 1발이었고, 12척의 배가 고전을 면치 못했지만, 이순신 장군은 끝까지 싸워 2명을 제외하고는 전원 전사하게 됩니다. 이순신 장군의 전투를 비롯하여 그의 업적은 조선 수군의 전투 승리를 이끌었으며, 그의 이름은 한국의 국보 제76호로 등재되어 있습니다. 이순신 장군의 업적과 정신은 한국 역사와 문화에 지대한 영향을 끼쳤습니다.
```
```python
messages = [
{"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."},
{"role": "user", "content": "독도와 다케시마의 차이가 뭐야?"},
]
---
! `한국의 고유영토`와 `일본의 영유권 과장` 차이! 🌊
일본이 주장하는 `다케시마(竹島)`는, 실제로는 '울릉도' 서쪽 약 87km 지점에 있는 '독도(獨島)'를 가리킵니다. 한국이 역사적으로부터 지배하고 관리해온 것은 물론, 유네스코 세계유산으로 등재된 `대한민국의 영토`이지요. 😎
한편, 독도는 한국의 고유 이름으로, `울릉도`와 함께 `삼국사기`에 처음 기재된 `대한민국의 고유 섬`입니다. 1886년 일본이 '다케시마'라고 명명한 이래, 공식적으로 `일본 고유 섬`으로 주장하고 있지만, 이는 `역사 왜곡`입니다! ⚠️
따라서, 세계적으로 인정받는 국가의 `영토`는 한국의 `독도`가 맞는 것이지요! 🔥 `울릉도`도 `독도`도 한국의 고유 영토에 속한 섬들이랍니다! 💕
혹시 더 궁금한 사실이 있으면 말씀해주세요! 🤔
```
```python
messages = [
{"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."},
{"role": "user", "content": "세종대왕의 집현전 맥북 던짐 사건에 대해서 알고 있어?"},
]
---
아주 흥미로운 사건입니다. 세종대왕이 집현전에서 맥북을 던졌다는 이야기는 없습니다. 실제로 세종대왕은 1418년 1월 9일 집현전에서 26명의 문신과 함께 새로운 훈민정음 반포를 위해 간자체의 글씨를 사용하는 것을 반대한 홍윤성 등의 무리와의 논쟁을 벌였습니다. 당시 세종대왕은 이들의 주장을 단호히 거부했습니다. 이 사건은 '집현전 논박 사건'으로 알려져 있습니다. 맥북과 관련된 일이 아니라는 것이죠. 맥북은 2006년 애플이 출시한 노트북의 브랜드명입니다. 세종대왕이 살았던 시기는 1418년에서 1450년까지입니다. 맥북은 훨씬 뒤의 일입니다. 😅
```
```python
messages = [
{"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."},
{"role": "user", "content": "나는 오늘 사과 5개를 갖고있어. 그리고 어제 나는 사과 2개를 먹었지. 내가 지금 가진 사과는 몇 개지?"},
]
---
👀 당신은 오늘 5개의 사과를 갖고 있습니다. 그리고 어제 2개를 먹었다고 하셨으니, 오늘 현재 당신이 갖고 있는 사과는 5개에서 2개를 뺀 3개입니다! 😊
```
|
OwOpeepeepoopoo/DancingElaine10 | OwOpeepeepoopoo | 2024-05-24T22:28:28Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-23T15:16:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
cacarekt/manuelabem_dbmdzbert | cacarekt | 2024-05-24T22:28:05Z | 184 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-24T22:27:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
KoboldAI/GPT-J-6B-Skein-GGML | KoboldAI | 2024-05-24T22:26:06Z | 0 | 2 | null | [
"arxiv:1910.09700",
"license:apache-2.0",
"region:us"
] | null | 2024-05-24T21:43:13Z | ---
license: apache-2.0
---
# GPT-J-Skein - GGML Edition
This is the GGML port of our classic GPT-J-Skein model, a 6B model focused on text adventures, with additional novel data.
It was a beloved text adventure and even writing model; back in the day, people used the anti-"You"-bias userscript to enhance its writing ability.
Later it was remade as Skein-20B, which we also intend to convert to GGUF.
### GGML in 2024, really?
Yes. GPT-J never saw adoption by Llamacpp, and until this changes we have to rely on older code that originated from the pygmalioncpp project and still lives on in KoboldCpp today.
This model release was tested to work in KoboldCpp 1.66, but due to the age of the format it does come with limitations.
### What are the limitations of this conversion?
This format dates back to a time when K-quants did not exist yet, so you will only be able to use regular quants or the FP16 version.
Likewise, a lot of modern features will be missing from the engine; you can still use smartcontext, but you can't use context shifting.
You can offload if you have a CUDA-compatible GPU (ROCm is untested but may work); for full acceleration, every layer must be on the GPU.
For non-Nvidia GPUs you can use CLBlast to speed up prompt processing; Vulkan does not support these older GGML models, as it does not exist in our legacy code.
RoPE scaling, even though it is a much newer feature, should be compatible, and we also expect some of the more modern samplers to be compatible.
### I don't use KoboldCpp, can I use it in X?
No, this upload is only meant for use with KoboldCpp.
If you haven't tried KoboldCpp yet, go give it a try! You can find it at https://koboldai.org/cpp
### How was this conversion done?
Inside KoboldCpp's source code you can find `otherarch/tools/convert_hf_gptj.py`, which converts the Hugging Face checkpoint to GGML.
The relevant `quantize_gptj` tool can be compiled by running `make tools` in the KoboldCpp source code root directory.
---
# Model Card for GPT-J-6B-Skein
# Model Details
## Model Description
- **Developed by:** KoboldAI
- **Shared by [Optional]:** KoboldAI
- **Model type:** Text Generation
- **Language(s) (NLP):** English
- **License:** Apache License 2.0
- **Related Models:** [GPT-J 6B](https://huggingface.co/EleutherAI/gpt-j-6B?text=My+name+is+Mariama%2C+my+favorite)
- **Parent Model:** GPT-J
- **Resources for more information:**
- [GitHub Repo](https://github.com/kingoflolz/mesh-transformer-jax)
- [Associated Model Doc](https://huggingface.co/docs/transformers/main/en/model_doc/gptj#transformers.GPTJForCausalLM)
# Uses
## Direct Use
This model is designed for creative story generation. It can understand both free-form text and text written in interactive fiction style with actions starting with "> You", such as:
```
You become aware of her breathing -- the slight expansion of her ribs, the soft exhalation -- natural, and yet somehow studied. "Ah -- by the way," she says, in a way that utterly fails to be casual, "have you seen the artist out there? -- My artist, that is."
"No," you respond, uneasy. You open your mouth and close it again.
> You ask about the experience of waking up
```
## Downstream Use [Optional]
More information needed
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output.
GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
See the [GPT-J 6B model card](https://huggingface.co/EleutherAI/gpt-j-6B?text=My+name+is+Mariama%2C+my+favorite) for more information.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
The data are mostly comprised of light novels from the dataset of the [KoboldAI/GPT-Neo-2.7B-Horni-LN](https://huggingface.co/KoboldAI/GPT-Neo-2.7B-Horni-LN) model and assorted interactive fiction. The dataset uses `[Themes: <comma-separated list of genres>]` for tagging, which means that if similar text is placed in the context, the model will attempt to generate text in the specified style(s). For more details about the dataset, consult [this document](https://wandb.ai/ve-forbryderne/skein/runs/files/files/datasets/README.txt).
## Training Procedure
### Preprocessing
The data were preprocessed using the Python package ftfy to eliminate, as far as possible, non-ASCII punctuation characters and encoding errors. The interactive fiction in the dataset also underwent deduplication, since interactive fiction logs often contain duplicate text from, for example, visiting the same in-game area several times. spaCy was used for grammatical analysis, with the purpose of reformatting the actions commonly found in old text adventure games into more complete sentences. There was also some manual elimination of things such as "thank you for playing" messages and title messages.
### Speeds, Sizes, Times
Training took approximately 14 hours in total, with the average speed being 5265 tokens per second.
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
More information needed
### Factors
### Metrics
More information needed
## Results
More information needed
# Model Examination
More information needed
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
https://github.com/kingoflolz/mesh-transformer-jax
# Citation
**BibTeX:**
```
@misc{mesh-transformer-jax,
author = {Wang, Ben},
title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}},
howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
year = 2021,
month = May
}
```
# Glossary [optional]
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
KoboldAI in collaboration with Ezi Ozoani and the Hugging Face team
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("KoboldAI/GPT-J-6B-Skein")
model = AutoModelForCausalLM.from_pretrained("KoboldAI/GPT-J-6B-Skein")
```
</details>
|
AmirrezaV1/emotion_pytorch | AmirrezaV1 | 2024-05-24T22:24:50Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-24T21:13:33Z | ---
license: apache-2.0
---
|
Omriy123/vit_epochs5_batch32_lr5e-05_size224_tiles3_seed1_q1 | Omriy123 | 2024-05-24T22:23:53Z | 221 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-24T22:09:43Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- image-classification
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit_epochs5_batch32_lr5e-05_size224_tiles3_seed1_q1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: Dogs_vs_Cats
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9829333333333333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit_epochs5_batch32_lr5e-05_size224_tiles3_seed1_q1
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the Dogs_vs_Cats dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0595
- Accuracy: 0.9829
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0888 | 1.0 | 469 | 0.1256 | 0.9683 |
| 0.0005 | 2.0 | 938 | 0.0595 | 0.9829 |
| 0.0002 | 3.0 | 1407 | 0.0620 | 0.9861 |
| 0.0001 | 4.0 | 1876 | 0.0645 | 0.9861 |
| 0.0001 | 5.0 | 2345 | 0.0625 | 0.9875 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.19.1
|
JawadC/reblochon | JawadC | 2024-05-24T22:23:08Z | 2 | 1 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-24T21:25:33Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of REBLOCHON cheese
widget:
- text: A REBLOCHON cheese sits on a worn wooden table, bathed in warm sunlight.
output:
url: image_0.png
- text: A REBLOCHON cheese sits on a worn wooden table, bathed in warm sunlight.
output:
url: image_1.png
- text: A REBLOCHON cheese sits on a worn wooden table, bathed in warm sunlight.
output:
url: image_2.png
- text: A REBLOCHON cheese sits on a worn wooden table, bathed in warm sunlight.
output:
url: image_3.png
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - JawadC/reblochon
<Gallery />
## Model description
These are JawadC/reblochon LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of REBLOCHON cheese` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/JawadC/reblochon/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# A minimal sketch, not the exact training setup: load the SDXL base with the
# special VAE noted above, attach these LoRA weights, and use the trigger phrase.
# The inference settings below are assumptions.
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("JawadC/reblochon")

prompt = "A photo of REBLOCHON cheese on a worn wooden table"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("reblochon.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
santoshbt/sft-tiny-chatbot | santoshbt | 2024-05-24T22:17:09Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-05-23T20:44:40Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
model-index:
- name: sft-tiny-chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft-tiny-chatbot
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
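In the meantime, a minimal generation sketch, assuming the adapter loads with PEFT's `AutoPeftModelForCausalLM` (the prompt and generation settings are placeholders):

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Base tokenizer; the adapter's expected prompt format is an assumption.
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
model = AutoPeftModelForCausalLM.from_pretrained(
    "santoshbt/sft-tiny-chatbot", torch_dtype=torch.float16
).to("cuda")

inputs = tokenizer("Hello, who are you?", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```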
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
pszemraj/Mistral-7B-v0.3-prune6 | pszemraj | 2024-05-24T22:15:47Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"en",
"base_model:mistralai/Mistral-7B-v0.3",
"base_model:finetune:mistralai/Mistral-7B-v0.3",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T19:07:50Z | ---
base_model:
- mistralai/Mistral-7B-v0.3
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
language:
- en
---
# Mistral-7B-v0.3-prune6
This is a layer-pruned pre-trained language model sliced with [mergekit](https://github.com/cg123/mergekit). No additional training.
## layer selection

## Quick eval
Quick eval for: pszemraj/Mistral-7B-v0.3-prune6
hf (pretrained=pszemraj/Mistral-7B-v0.3-prune6,trust_remote_code=True,dtype=bfloat16), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 2
| Tasks |Version|Filter|n-shot| Metric | Value | |Stderr|
|--------------|------:|------|-----:|----------|------:|---|-----:|
|arc_easy | 1|none | 0|acc | 0.6393|± |0.0099|
| | |none | 0|acc_norm | 0.6309|± |0.0099|
|boolq | 2|none | 0|acc | 0.7599|± |0.0075|
|lambada_openai| 1|none | 0|perplexity|10.1184|± |0.2771|
| | |none | 0|acc | 0.5507|± |0.0069|
|openbookqa | 1|none | 0|acc | 0.2200|± |0.0185|
| | |none | 0|acc_norm | 0.3580|± |0.0215|
|piqa | 1|none | 0|acc | 0.7203|± |0.0105|
| | |none | 0|acc_norm | 0.7350|± |0.0103|
|winogrande | 1|none | 0|acc | 0.6906|± |0.0130|
### original
hf (pretrained=mistralai/Mistral-7B-v0.3,trust_remote_code=True,dtype=bfloat16), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 2
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|--------------|------:|------|-----:|----------|-----:|---|-----:|
|arc_easy | 1|none | 0|acc |0.7959|± |0.0083|
| | |none | 0|acc_norm |0.7832|± |0.0085|
|boolq | 2|none | 0|acc |0.8202|± |0.0067|
|lambada_openai| 1|none | 0|perplexity|3.2578|± |0.0601|
| | |none | 0|acc |0.7518|± |0.0060|
|openbookqa | 1|none | 0|acc |0.3340|± |0.0211|
| | |none | 0|acc_norm |0.4420|± |0.0222|
|piqa | 1|none | 0|acc |0.8009|± |0.0093|
| | |none | 0|acc_norm |0.8215|± |0.0089|
|winogrande | 1|none | 0|acc |0.7380|± |0.0124|
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
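In practice, a merge like this is produced with mergekit's `mergekit-yaml` entry point — roughly `mergekit-yaml config.yaml ./output-dir` (the config filename here is hypothetical) — using the configuration shown below.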
### Models Merged
The following models were included in the merge:
* [mistralai/Mistral-7B-v0.3](https://huggingface.co/mistralai/Mistral-7B-v0.3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
- layer_range: [0, 22]
model: mistralai/Mistral-7B-v0.3
- sources:
- layer_range: [28, 32]
model: mistralai/Mistral-7B-v0.3
``` |
apwic/nerugm-lora-r4a0d0.05 | apwic | 2024-05-24T22:15:11Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"id",
"base_model:indolem/indobert-base-uncased",
"base_model:finetune:indolem/indobert-base-uncased",
"license:mit",
"region:us"
] | null | 2024-05-24T14:52:25Z | ---
language:
- id
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: nerugm-lora-r4a0d0.05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nerugm-lora-r4a0d0.05
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1305
- Precision: 0.7407
- Recall: 0.8698
- F1: 0.8001
- Accuracy: 0.9579
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.7682 | 1.0 | 528 | 0.4394 | 0.4048 | 0.1185 | 0.1834 | 0.8663 |
| 0.3466 | 2.0 | 1056 | 0.2217 | 0.6022 | 0.7379 | 0.6632 | 0.9327 |
| 0.2131 | 3.0 | 1584 | 0.1728 | 0.6765 | 0.8396 | 0.7493 | 0.9428 |
| 0.1759 | 4.0 | 2112 | 0.1509 | 0.7221 | 0.8559 | 0.7833 | 0.9516 |
| 0.1563 | 5.0 | 2640 | 0.1422 | 0.7303 | 0.8605 | 0.7901 | 0.9533 |
| 0.1464 | 6.0 | 3168 | 0.1429 | 0.7202 | 0.8722 | 0.7890 | 0.9541 |
| 0.1394 | 7.0 | 3696 | 0.1440 | 0.7153 | 0.8745 | 0.7869 | 0.9525 |
| 0.1325 | 8.0 | 4224 | 0.1398 | 0.7274 | 0.8791 | 0.7961 | 0.9553 |
| 0.1269 | 9.0 | 4752 | 0.1341 | 0.7420 | 0.8675 | 0.7999 | 0.9579 |
| 0.124 | 10.0 | 5280 | 0.1331 | 0.7379 | 0.8768 | 0.8014 | 0.9565 |
| 0.1194 | 11.0 | 5808 | 0.1329 | 0.7389 | 0.8815 | 0.8039 | 0.9569 |
| 0.1171 | 12.0 | 6336 | 0.1337 | 0.7384 | 0.8791 | 0.8027 | 0.9567 |
| 0.1153 | 13.0 | 6864 | 0.1294 | 0.7447 | 0.8745 | 0.8044 | 0.9587 |
| 0.1119 | 14.0 | 7392 | 0.1310 | 0.7472 | 0.8791 | 0.8078 | 0.9573 |
| 0.1109 | 15.0 | 7920 | 0.1312 | 0.7457 | 0.8722 | 0.8040 | 0.9579 |
| 0.1102 | 16.0 | 8448 | 0.1309 | 0.7442 | 0.8791 | 0.8061 | 0.9581 |
| 0.1095 | 17.0 | 8976 | 0.1314 | 0.7447 | 0.8815 | 0.8073 | 0.9587 |
| 0.1073 | 18.0 | 9504 | 0.1323 | 0.7403 | 0.8745 | 0.8018 | 0.9577 |
| 0.107 | 19.0 | 10032 | 0.1300 | 0.7407 | 0.8698 | 0.8001 | 0.9581 |
| 0.1073 | 20.0 | 10560 | 0.1305 | 0.7407 | 0.8698 | 0.8001 | 0.9579 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
xuliu15/FT-10m | xuliu15 | 2024-05-24T22:14:39Z | 18 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:mozilla-foundation/common_voice_6_1",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-17T21:55:33Z | ---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_6_1
metrics:
- wer
model-index:
- name: Whisper Small Frisian 10m
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 6.1
type: mozilla-foundation/common_voice_6_1
args: 'config: frisian, split: test'
metrics:
- name: Wer
type: wer
value: 75.45431645615172
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Frisian 10m
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 6.1 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7704
- Wer: 75.4543
## Model description
More information needed
## Intended uses & limitations
More information needed
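In the meantime, a minimal transcription sketch with the 🤗 `pipeline` API (the audio file is a placeholder; chunking and language options are left at their defaults):

```python
from transformers import pipeline

# Loads the fine-tuned Whisper checkpoint for Frisian transcription.
asr = pipeline("automatic-speech-recognition", model="xuliu15/FT-10m")
print(asr("frisian_sample.wav")["text"])
```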
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0024 | 25.0 | 200 | 1.6444 | 72.9821 |
| 0.0008 | 50.0 | 400 | 1.6951 | 73.5507 |
| 0.0004 | 75.0 | 600 | 1.7404 | 73.4895 |
| 0.0003 | 100.0 | 800 | 1.7631 | 74.6374 |
| 0.0002 | 125.0 | 1000 | 1.7704 | 75.4543 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
lyhourt/whisper-small-clean_6-v4 | lyhourt | 2024-05-24T22:13:29Z | 15 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:lyhourt/clean_6",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-24T15:01:01Z | ---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- lyhourt/clean_6
metrics:
- wer
model-index:
- name: whisper-small-clean_6-v4
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: lyhourt/clean_6
type: lyhourt/clean_6
metrics:
- name: Wer
type: wer
value: 24.014921893215202
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-clean_6-v4
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the lyhourt/clean_6 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2886
- Wer: 24.0149
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.1207 | 0.5 | 200 | 0.3080 | 25.4138 |
| 0.1827 | 1.0 | 400 | 0.2953 | 24.7144 |
| 0.0907 | 1.1342 | 600 | 0.2921 | 24.3413 |
| 0.0904 | 1.5123 | 800 | 0.2900 | 24.3064 |
| 0.0823 | 1.8904 | 1000 | 0.2886 | 24.0149 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Obidia/Preshy | Obidia | 2024-05-24T22:11:34Z | 0 | 0 | null | [
"license:cdla-permissive-2.0",
"region:us"
] | null | 2024-05-24T22:11:34Z | ---
license: cdla-permissive-2.0
---
|
ianmiller314/results | ianmiller314 | 2024-05-24T22:09:54Z | 111 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"trl",
"reward-trainer",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-24T22:09:44Z | ---
license: apache-2.0
base_model: distilroberta-base
tags:
- trl
- reward-trainer
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
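Given the `trl`/`reward-trainer` tags, scoring a candidate response presumably looks like a standard sequence-classification pass — a sketch, assuming a single-logit reward head:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ianmiller314/results")
model = AutoModelForSequenceClassification.from_pretrained("ianmiller314/results")

inputs = tokenizer("A helpful, polite reply.", return_tensors="pt")
with torch.no_grad():
    reward = model(**inputs).logits[0].item()  # higher means more preferred
print(reward)
```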
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.41.1
- Pytorch 2.1.1+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1
|
hgnoi/aQ1Kz5hUv2QkTNQC | hgnoi | 2024-05-24T22:08:40Z | 136 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T22:06:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
meriamcherif/llama-2-7b-azureLogs2.0 | meriamcherif | 2024-05-24T22:07:26Z | 0 | 0 | peft | [
"peft",
"llama",
"4-bit",
"region:us"
] | null | 2024-05-24T22:07:04Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
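For reference, the config above corresponds to a `BitsAndBytesConfig` like the one below. This is a minimal sketch; the base model id is an assumption, since the card does not state it:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# reconstruct the quantization config listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
)

# hypothetical base model id (not stated on this card)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "meriamcherif/llama-2-7b-azureLogs2.0")
```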
### Framework versions
- PEFT 0.4.0
|
DLProjectLSI/Phi-3-mini-4k | DLProjectLSI | 2024-05-24T22:04:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"base_model:finetune:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-24T22:02:42Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** DLProjectLSI
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
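A minimal loading sketch, assuming the standard Unsloth API:

```python
from unsloth import FastLanguageModel

# load the fine-tuned model in 4-bit for inference
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="DLProjectLSI/Phi-3-mini-4k",
    max_seq_length=4096,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path
```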
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
bartowski/Kudzu-8B-GGUF | bartowski | 2024-05-24T21:54:23Z | 20 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"text-generation",
"base_model:Edgerunners/meta-llama-3-8b-instruct-hf-ortho-baukit-10fail-1000total",
"base_model:merge:Edgerunners/meta-llama-3-8b-instruct-hf-ortho-baukit-10fail-1000total",
"base_model:cgato/L3-TheSpice-8b-v0.8.3",
"base_model:merge:cgato/L3-TheSpice-8b-v0.8.3",
"base_model:lodrick-the-lafted/Limon-8B",
"base_model:merge:lodrick-the-lafted/Limon-8B",
"base_model:lodrick-the-lafted/Olethros-8B",
"base_model:merge:lodrick-the-lafted/Olethros-8B",
"base_model:lodrick-the-lafted/Rummage-8B",
"base_model:merge:lodrick-the-lafted/Rummage-8B",
"base_model:unsloth/llama-3-8b-Instruct",
"base_model:merge:unsloth/llama-3-8b-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-05-24T21:33:23Z | ---
license: apache-2.0
base_model:
- lodrick-the-lafted/Olethros-8B
- lodrick-the-lafted/Limon-8B
- lodrick-the-lafted/Rummage-8B
- cgato/L3-TheSpice-8b-v0.8.3
- unsloth/llama-3-8b-Instruct
- Edgerunners/meta-llama-3-8b-instruct-hf-ortho-baukit-10fail-1000total
library_name: transformers
tags:
- mergekit
- merge
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of Kudzu-8B
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2965">b2965</a> for quantization.
Original model: https://huggingface.co/lodrick-the-lafted/Kudzu-8B
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/b6ac44691e994344625687afe3263b3a)
## Prompt format
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
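For scripted use, the template above can be filled in with a small helper; this function is illustrative and not part of the original card:

```python
def build_prompt(system_prompt: str, prompt: str) -> str:
    """Fill the Llama-3-style chat template shown above."""
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )
```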
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Kudzu-8B-Q8_0.gguf](https://huggingface.co/bartowski/Kudzu-8B-GGUF/blob/main/Kudzu-8B-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. |
| [Kudzu-8B-Q6_K.gguf](https://huggingface.co/bartowski/Kudzu-8B-GGUF/blob/main/Kudzu-8B-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. |
| [Kudzu-8B-Q5_K_M.gguf](https://huggingface.co/bartowski/Kudzu-8B-GGUF/blob/main/Kudzu-8B-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. |
| [Kudzu-8B-Q5_K_S.gguf](https://huggingface.co/bartowski/Kudzu-8B-GGUF/blob/main/Kudzu-8B-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. |
| [Kudzu-8B-Q4_K_M.gguf](https://huggingface.co/bartowski/Kudzu-8B-GGUF/blob/main/Kudzu-8B-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Kudzu-8B-Q4_K_S.gguf](https://huggingface.co/bartowski/Kudzu-8B-GGUF/blob/main/Kudzu-8B-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. |
| [Kudzu-8B-IQ4_NL.gguf](https://huggingface.co/bartowski/Kudzu-8B-GGUF/blob/main/Kudzu-8B-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [Kudzu-8B-IQ4_XS.gguf](https://huggingface.co/bartowski/Kudzu-8B-GGUF/blob/main/Kudzu-8B-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Kudzu-8B-Q3_K_L.gguf](https://huggingface.co/bartowski/Kudzu-8B-GGUF/blob/main/Kudzu-8B-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. |
| [Kudzu-8B-Q3_K_M.gguf](https://huggingface.co/bartowski/Kudzu-8B-GGUF/blob/main/Kudzu-8B-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. |
| [Kudzu-8B-IQ3_M.gguf](https://huggingface.co/bartowski/Kudzu-8B-GGUF/blob/main/Kudzu-8B-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Kudzu-8B-IQ3_S.gguf](https://huggingface.co/bartowski/Kudzu-8B-GGUF/blob/main/Kudzu-8B-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [Kudzu-8B-Q3_K_S.gguf](https://huggingface.co/bartowski/Kudzu-8B-GGUF/blob/main/Kudzu-8B-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. |
| [Kudzu-8B-IQ3_XS.gguf](https://huggingface.co/bartowski/Kudzu-8B-GGUF/blob/main/Kudzu-8B-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Kudzu-8B-IQ3_XXS.gguf](https://huggingface.co/bartowski/Kudzu-8B-GGUF/blob/main/Kudzu-8B-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Kudzu-8B-Q2_K.gguf](https://huggingface.co/bartowski/Kudzu-8B-GGUF/blob/main/Kudzu-8B-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. |
| [Kudzu-8B-IQ2_M.gguf](https://huggingface.co/bartowski/Kudzu-8B-GGUF/blob/main/Kudzu-8B-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Kudzu-8B-IQ2_S.gguf](https://huggingface.co/bartowski/Kudzu-8B-GGUF/blob/main/Kudzu-8B-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. |
| [Kudzu-8B-IQ2_XS.gguf](https://huggingface.co/bartowski/Kudzu-8B-GGUF/blob/main/Kudzu-8B-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. |
| [Kudzu-8B-IQ2_XXS.gguf](https://huggingface.co/bartowski/Kudzu-8B-GGUF/blob/main/Kudzu-8B-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Lower quality, uses SOTA techniques to be usable. |
| [Kudzu-8B-IQ1_M.gguf](https://huggingface.co/bartowski/Kudzu-8B-GGUF/blob/main/Kudzu-8B-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. |
| [Kudzu-8B-IQ1_S.gguf](https://huggingface.co/bartowski/Kudzu-8B-GGUF/blob/main/Kudzu-8B-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Kudzu-8B-GGUF --include "Kudzu-8B-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Kudzu-8B-GGUF --include "Kudzu-8B-Q8_0.gguf/*" --local-dir Kudzu-8B-Q8_0
```
You can either specify a new local-dir (Kudzu-8B-Q8_0) or download them all in place (./)
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
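As a rough illustration of that rule of thumb, with file sizes taken from the table above (the helper itself is hypothetical):

```python
# pick the largest quant whose file leaves ~1.5 GB of headroom in (V)RAM
QUANT_SIZES_GB = {
    "Q8_0": 8.54, "Q6_K": 6.59, "Q5_K_M": 5.73, "Q4_K_M": 4.92,
    "IQ4_XS": 4.44, "Q3_K_M": 4.01, "IQ3_M": 3.78, "Q2_K": 3.17,
}

def pick_quant(available_gb: float, headroom_gb: float = 1.5) -> str | None:
    fits = {q: s for q, s in QUANT_SIZES_GB.items() if s <= available_gb - headroom_gb}
    return max(fits, key=fits.get) if fits else None
```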
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also runs on AMD, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
apwic/nerugm-lora-r2a2d0.15 | apwic | 2024-05-24T21:53:55Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"id",
"base_model:indolem/indobert-base-uncased",
"base_model:finetune:indolem/indobert-base-uncased",
"license:mit",
"region:us"
] | null | 2024-05-24T14:50:45Z | ---
language:
- id
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: nerugm-lora-r2a2d0.15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nerugm-lora-r2a2d0.15
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1346
- Precision: 0.7342
- Recall: 0.8652
- F1: 0.7943
- Accuracy: 0.9555
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.79 | 1.0 | 528 | 0.4638 | 0.3302 | 0.0813 | 0.1305 | 0.8595 |
| 0.3919 | 2.0 | 1056 | 0.2519 | 0.5954 | 0.6729 | 0.6318 | 0.9275 |
| 0.2386 | 3.0 | 1584 | 0.1927 | 0.6540 | 0.7908 | 0.7159 | 0.9382 |
| 0.193 | 4.0 | 2112 | 0.1677 | 0.6826 | 0.8234 | 0.7464 | 0.9448 |
| 0.1712 | 5.0 | 2640 | 0.1594 | 0.6959 | 0.8443 | 0.7629 | 0.9476 |
| 0.1596 | 6.0 | 3168 | 0.1544 | 0.7082 | 0.8559 | 0.7751 | 0.9498 |
| 0.1524 | 7.0 | 3696 | 0.1519 | 0.7012 | 0.8605 | 0.7728 | 0.9506 |
| 0.1452 | 8.0 | 4224 | 0.1461 | 0.7203 | 0.8605 | 0.7842 | 0.9522 |
| 0.1397 | 9.0 | 4752 | 0.1432 | 0.7263 | 0.8559 | 0.7858 | 0.9535 |
| 0.1369 | 10.0 | 5280 | 0.1394 | 0.7258 | 0.8536 | 0.7845 | 0.9539 |
| 0.1336 | 11.0 | 5808 | 0.1375 | 0.7321 | 0.8512 | 0.7872 | 0.9543 |
| 0.1305 | 12.0 | 6336 | 0.1375 | 0.7345 | 0.8536 | 0.7896 | 0.9547 |
| 0.1281 | 13.0 | 6864 | 0.1351 | 0.7330 | 0.8536 | 0.7887 | 0.9547 |
| 0.1252 | 14.0 | 7392 | 0.1360 | 0.7342 | 0.8652 | 0.7943 | 0.9553 |
| 0.124 | 15.0 | 7920 | 0.1364 | 0.7292 | 0.8559 | 0.7875 | 0.9541 |
| 0.1234 | 16.0 | 8448 | 0.1351 | 0.7260 | 0.8605 | 0.7876 | 0.9549 |
| 0.1224 | 17.0 | 8976 | 0.1357 | 0.7299 | 0.8652 | 0.7918 | 0.9549 |
| 0.1208 | 18.0 | 9504 | 0.1360 | 0.7333 | 0.8675 | 0.7948 | 0.9553 |
| 0.1201 | 19.0 | 10032 | 0.1350 | 0.7347 | 0.8675 | 0.7956 | 0.9555 |
| 0.1205 | 20.0 | 10560 | 0.1346 | 0.7342 | 0.8652 | 0.7943 | 0.9555 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
hgnoi/JkIwyWa8dk6QH4wD | hgnoi | 2024-05-24T21:44:53Z | 136 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T21:40:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mlx-community/openchat-3.6-8b-20240522-8bit | mlx-community | 2024-05-24T21:39:53Z | 81 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"openchat",
"llama3",
"C-RLFT",
"mlx",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:finetune:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T21:37:03Z | ---
license: llama3
library_name: transformers
tags:
- openchat
- llama3
- C-RLFT
- mlx
base_model: meta-llama/Meta-Llama-3-8B
pipeline_tag: text-generation
---
# mlx-community/openchat-3.6-8b-20240522-8bit
This model was converted to MLX format from [`openchat/openchat-3.6-8b-20240522`](https://huggingface.co/openchat/openchat-3.6-8b-20240522) using mlx-lm version **0.12.1**.
Refer to the [original model card](https://huggingface.co/openchat/openchat-3.6-8b-20240522) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/openchat-3.6-8b-20240522-8bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
mlx-community/openchat-3.6-8b-20240522-2bit | mlx-community | 2024-05-24T21:38:28Z | 82 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"openchat",
"llama3",
"C-RLFT",
"mlx",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:finetune:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T21:37:31Z | ---
license: llama3
library_name: transformers
tags:
- openchat
- llama3
- C-RLFT
- mlx
base_model: meta-llama/Meta-Llama-3-8B
pipeline_tag: text-generation
---
# mlx-community/openchat-3.6-8b-20240522-2bit
This model was converted to MLX format from [`openchat/openchat-3.6-8b-20240522`](https://huggingface.co/openchat/openchat-3.6-8b-20240522) using mlx-lm version **0.12.1**.
Refer to the [original model card](https://huggingface.co/openchat/openchat-3.6-8b-20240522) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/openchat-3.6-8b-20240522-2bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
apwic/nerugm-lora-r2a2d0.1 | apwic | 2024-05-24T21:32:17Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"id",
"base_model:indolem/indobert-base-uncased",
"base_model:finetune:indolem/indobert-base-uncased",
"license:mit",
"region:us"
] | null | 2024-05-24T14:49:07Z | ---
language:
- id
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: nerugm-lora-r2a2d0.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nerugm-lora-r2a2d0.1
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1332
- Precision: 0.7287
- Recall: 0.8536
- F1: 0.7862
- Accuracy: 0.9555
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.7886 | 1.0 | 528 | 0.4607 | 0.3243 | 0.0837 | 0.1330 | 0.8597 |
| 0.3911 | 2.0 | 1056 | 0.2542 | 0.6081 | 0.6915 | 0.6471 | 0.9293 |
| 0.2384 | 3.0 | 1584 | 0.1934 | 0.6527 | 0.7937 | 0.7163 | 0.9376 |
| 0.1934 | 4.0 | 2112 | 0.1678 | 0.6880 | 0.8187 | 0.7477 | 0.9446 |
| 0.172 | 5.0 | 2640 | 0.1589 | 0.6901 | 0.8373 | 0.7566 | 0.9468 |
| 0.1602 | 6.0 | 3168 | 0.1533 | 0.6931 | 0.8489 | 0.7631 | 0.9488 |
| 0.1532 | 7.0 | 3696 | 0.1505 | 0.6935 | 0.8559 | 0.7662 | 0.9498 |
| 0.1457 | 8.0 | 4224 | 0.1456 | 0.7103 | 0.8536 | 0.7754 | 0.9522 |
| 0.1401 | 9.0 | 4752 | 0.1418 | 0.7301 | 0.8536 | 0.7870 | 0.9543 |
| 0.1375 | 10.0 | 5280 | 0.1388 | 0.7308 | 0.8582 | 0.7894 | 0.9551 |
| 0.1331 | 11.0 | 5808 | 0.1360 | 0.7308 | 0.8582 | 0.7894 | 0.9555 |
| 0.1304 | 12.0 | 6336 | 0.1365 | 0.7258 | 0.8536 | 0.7845 | 0.9549 |
| 0.1285 | 13.0 | 6864 | 0.1343 | 0.7380 | 0.8512 | 0.7906 | 0.9559 |
| 0.1255 | 14.0 | 7392 | 0.1345 | 0.7401 | 0.8605 | 0.7958 | 0.9559 |
| 0.1249 | 15.0 | 7920 | 0.1346 | 0.7332 | 0.8605 | 0.7918 | 0.9549 |
| 0.1238 | 16.0 | 8448 | 0.1342 | 0.7307 | 0.8559 | 0.7883 | 0.9551 |
| 0.1232 | 17.0 | 8976 | 0.1342 | 0.7326 | 0.8582 | 0.7905 | 0.9557 |
| 0.1215 | 18.0 | 9504 | 0.1351 | 0.7317 | 0.8605 | 0.7909 | 0.9549 |
| 0.1209 | 19.0 | 10032 | 0.1337 | 0.7278 | 0.8559 | 0.7866 | 0.9547 |
| 0.1207 | 20.0 | 10560 | 0.1332 | 0.7287 | 0.8536 | 0.7862 | 0.9555 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
anakin87/Llama-3-8b-ita-ties | anakin87 | 2024-05-24T21:31:59Z | 2,904 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"it",
"arxiv:2306.01708",
"base_model:DeepMount00/Llama-3-8b-Ita",
"base_model:merge:DeepMount00/Llama-3-8b-Ita",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:merge:meta-llama/Meta-Llama-3-8B",
"base_model:swap-uniba/LLaMAntino-3-ANITA-8B-Inst-DPO-ITA",
"base_model:merge:swap-uniba/LLaMAntino-3-ANITA-8B-Inst-DPO-ITA",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-17T18:07:47Z | ---
base_model:
- swap-uniba/LLaMAntino-3-ANITA-8B-Inst-DPO-ITA
- DeepMount00/Llama-3-8b-Ita
- meta-llama/Meta-Llama-3-8B
library_name: transformers
tags:
- mergekit
- merge
license: llama3
language:
- it
---
# Llama-3-8b-ita-ties
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
I tried to merge two of the best Italian LLMs using Mergekit. The results are acceptable, but I could not improve on the best existing model.
## Evaluation
For a detailed comparison of model performance, check out the [Leaderboard for Italian Language Models](https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard).
Here's a breakdown of the performance metrics:
| Metric | hellaswag_it acc_norm | arc_it acc_norm | m_mmlu_it 5-shot acc | Average |
|:----------------------------|:----------------------|:----------------|:---------------------|:--------|
| **Accuracy Normalized** | 0.6621 | 0.5535 | 0.5749 | 0.5968 |
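A hedged sketch of how such metrics are typically reproduced with lm-evaluation-harness; the task names and flags below are assumptions based on the leaderboard's setup, not taken from this card:

```shell
pip install lm-eval
lm_eval --model hf \
    --model_args pretrained=anakin87/Llama-3-8b-ita-ties,dtype=bfloat16 \
    --tasks hellaswag_it,arc_it,m_mmlu_it \
    --batch_size auto
```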
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) as a base.
### Models Merged
The following models were included in the merge:
* [swap-uniba/LLaMAntino-3-ANITA-8B-Inst-DPO-ITA](https://huggingface.co/swap-uniba/LLaMAntino-3-ANITA-8B-Inst-DPO-ITA)
* [DeepMount00/Llama-3-8b-Ita](https://huggingface.co/DeepMount00/Llama-3-8b-Ita)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: meta-llama/Meta-Llama-3-8B
# no parameters necessary for base model
- model: swap-uniba/LLaMAntino-3-ANITA-8B-Inst-DPO-ITA
parameters:
density: 0.5
weight: 0.6
- model: DeepMount00/Llama-3-8b-Ita
parameters:
density: 0.5
weight: 0.3
merge_method: ties
base_model: meta-llama/Meta-Llama-3-8B
parameters:
normalize: true
dtype: bfloat16
```
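A sketch of how a config like this is typically run with mergekit; the config file name and output path are placeholders:

```shell
pip install mergekit
mergekit-yaml config.yaml ./Llama-3-8b-ita-ties --cuda
```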
|
joshcarp/gpt2-evy | joshcarp | 2024-05-24T21:24:53Z | 52 | 1 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-08T01:43:15Z | ---
license: mit
tags:
- generated_from_trainer
base_model: gpt2
model-index:
- name: gpt2-evy
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-evy
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5637
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 31 | 1.1270 |
| No log | 2.0 | 62 | 0.9170 |
| No log | 3.0 | 93 | 0.8410 |
| 1.3727 | 4.0 | 124 | 0.7949 |
| 1.3727 | 5.0 | 155 | 0.7361 |
| 1.3727 | 6.0 | 186 | 0.7021 |
| 0.9077 | 7.0 | 217 | 0.6789 |
| 0.9077 | 8.0 | 248 | 0.6490 |
| 0.9077 | 9.0 | 279 | 0.6346 |
| 0.7106 | 10.0 | 310 | 0.6219 |
| 0.7106 | 11.0 | 341 | 0.5986 |
| 0.7106 | 12.0 | 372 | 0.5797 |
| 0.5814 | 13.0 | 403 | 0.5835 |
| 0.5814 | 14.0 | 434 | 0.5872 |
| 0.5814 | 15.0 | 465 | 0.5741 |
| 0.5814 | 16.0 | 496 | 0.5749 |
| 0.4916 | 17.0 | 527 | 0.5662 |
| 0.4916 | 18.0 | 558 | 0.5529 |
| 0.4916 | 19.0 | 589 | 0.5672 |
| 0.4005 | 20.0 | 620 | 0.5646 |
| 0.4005 | 21.0 | 651 | 0.5701 |
| 0.4005 | 22.0 | 682 | 0.5650 |
| 0.357 | 23.0 | 713 | 0.5637 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
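A minimal inference sketch with the standard `transformers` pipeline; the prompt is illustrative:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="joshcarp/gpt2-evy")
print(generator("Hello, evy!", max_new_tokens=40)[0]["generated_text"])
```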
|
fluently/Fluently-XL-v2 | fluently | 2024-05-24T21:22:25Z | 2,693 | 24 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"sdxl",
"fluetnly-xl",
"fluently",
"trained",
"text-to-image",
"dataset:ehristoforu/midjourney-images",
"dataset:ehristoforu/dalle-3-images",
"dataset:ehristoforu/fav_images",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-03-07T16:37:20Z | ---
license: other
license_name: fluently-license
license_link: https://huggingface.co/spaces/fluently/License
datasets:
- ehristoforu/midjourney-images
- ehristoforu/dalle-3-images
- ehristoforu/fav_images
library_name: diffusers
pipeline_tag: text-to-image
base_model: stabilityai/stable-diffusion-xl-base-1.0
tags:
- safetensors
- stable-diffusion
- sdxl
- fluetnly-xl
- fluently
- trained
inference:
parameters:
num_inference_steps: 25
guidance_scale: 5
negative_prompt: "(deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, (mutated hands and fingers:1.4), disconnected limbs, mutation, mutated, ugly, disgusting, blurry, amputation"
---
# **Fluently XL** V2 - the best XL-model

[>>> Run in **RunDiffusion** <<<](https://app.rundiffusion.com/login?modelProvider=civitai&utm_source=civitai&via=civitai&redirect=%2F%3FmodelProvider%3Dcivitai%26utm_source%3Dcivitai%26via%3Dcivitai)
Introducing Fluently XL. You are probably ready to argue with the model's name, "the best XL-model", but now I will show you why it is true.
## About this model
The model was obtained through training on *expensive graphics accelerators*; a great deal of work went into it, and below we show why this XL model is better than others.
### Features
- Correct anatomy
- Art and realism in one
- Controlling contrast
- Great nature
- Great faces without AfterDetailer
### More info
Our model is better than others because we do not mix existing checkpoints but **train**. At first the model may not seem very strong, but experienced users will appreciate it.
## Using
Optimal parameters in Automatic1111/ComfyUI:
- Sampling steps: 20-35
- Sampler method: Euler a/Euler
- CFG Scale: 4-6.5
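Outside those UIs, the settings above translate into a minimal diffusers sketch; the prompts are illustrative, and Euler a maps to `EulerAncestralDiscreteScheduler`:

```python
import torch
from diffusers import StableDiffusionXLPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "fluently/Fluently-XL-v2", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

image = pipe(
    "portrait photo of a woman in a forest, natural light",
    negative_prompt="deformed, distorted, disfigured, poorly drawn, bad anatomy",
    num_inference_steps=25,
    guidance_scale=5,
).images[0]
image.save("fluently-xl-v2.png")
```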
## End
Let's remove models that merely copy each other from the top of the charts and promote one that is actually developing. Thank you!
|
fluently/Fluently-XL-v3-Lightning | fluently | 2024-05-24T21:22:05Z | 121 | 6 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"sdxl",
"lightning",
"fluetnly-xl",
"fluently",
"trained",
"text-to-image",
"dataset:ehristoforu/midjourney-images",
"dataset:ehristoforu/dalle-3-images",
"dataset:ehristoforu/fav_images",
"base_model:ByteDance/SDXL-Lightning",
"base_model:finetune:ByteDance/SDXL-Lightning",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-03-24T12:49:20Z | ---
license: other
license_name: fluently-license
license_link: https://huggingface.co/spaces/fluently/License
datasets:
- ehristoforu/midjourney-images
- ehristoforu/dalle-3-images
- ehristoforu/fav_images
library_name: diffusers
pipeline_tag: text-to-image
base_model: ByteDance/SDXL-Lightning
tags:
- safetensors
- stable-diffusion
- sdxl
- lightning
- fluetnly-xl
- fluently
- trained
inference:
parameters:
num_inference_steps: 5
guidance_scale: 2
negative_prompt: "(deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, (mutated hands and fingers:1.4), disconnected limbs, mutation, mutated, ugly, disgusting, blurry, amputation"
---
# **Fluently XL** V3 (Lightning-4Steps) - the best XL-model

[>>> Run in **RunDiffusion** <<<](https://civitai.com/api/run/408626?partnerId=1&strategyId=1099208953)
*This model is super-fast and can generate high-quality images.*
Introducing Fluently XL. You are probably ready to argue with the model's name, "the best XL-model", but now I will show you why it is true.
## About this model
The model was obtained through training on *expensive graphics accelerators*; a great deal of work went into it, and below we show why this XL model is better than others.
### Features
- Correct anatomy
- Art and realism in one
- Controlling contrast
- Great nature
- Great faces without AfterDetailer
### More info
Our model is better than others because we do not mix existing checkpoints but **train**. At first the model may not seem very strong, but experienced users will appreciate it.
## Using
Optimal parameters in Automatic1111/ComfyUI:
- Sampling steps: 4-6
- Sampler method: DPM++ SDE
- CFG Scale: 1.5-2
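A minimal diffusers sketch for the Lightning variant; the prompt is illustrative, and `DPMSolverSinglestepScheduler` is assumed here as the closest diffusers equivalent of DPM++ SDE:

```python
import torch
from diffusers import StableDiffusionXLPipeline, DPMSolverSinglestepScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "fluently/Fluently-XL-v3-Lightning", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DPMSolverSinglestepScheduler.from_config(pipe.scheduler.config)

image = pipe(
    "a cozy cabin in snowy mountains at dusk",
    num_inference_steps=5,
    guidance_scale=2,
).images[0]
image.save("fluently-xl-v3-lightning.png")
```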
## End
Let's remove models that merely copy each other from the top of the charts and promote one that is actually developing. Thank you! |
fluently/Fluently-XL-v3 | fluently | 2024-05-24T21:21:31Z | 66 | 12 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"sdxl",
"fluetnly-xl",
"fluently",
"trained",
"text-to-image",
"dataset:ehristoforu/midjourney-images",
"dataset:ehristoforu/dalle-3-images",
"dataset:ehristoforu/fav_images",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-03-20T10:50:23Z | ---
license: other
license_name: fluently-license
license_link: https://huggingface.co/spaces/fluently/License
datasets:
- ehristoforu/midjourney-images
- ehristoforu/dalle-3-images
- ehristoforu/fav_images
library_name: diffusers
pipeline_tag: text-to-image
base_model: stabilityai/stable-diffusion-xl-base-1.0
tags:
- safetensors
- stable-diffusion
- sdxl
- fluetnly-xl
- fluently
- trained
inference:
parameters:
num_inference_steps: 25
guidance_scale: 5
negative_prompt: "(deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, (mutated hands and fingers:1.4), disconnected limbs, mutation, mutated, ugly, disgusting, blurry, amputation"
---
# **Fluently XL** V3 - the best XL-model

[>>> Run in **RunDiffusion** <<<](https://civitai.com/api/run/401769?partnerId=1&strategyId=1067841896)
Introducing Fluently XL. You are probably ready to argue with the model's name, "the best XL-model", but now I will show you why it is true.
## About this model
The model was obtained through training on *expensive graphics accelerators*; a great deal of work went into it, and below we show why this XL model is better than others.
### Features
- Correct anatomy
- Art and realism in one
- Controlling contrast
- Great nature
- Great faces without AfterDetailer
### More info
Our model is better than others because we do not mix existing checkpoints but **train**. At first the model may not seem very strong, but experienced users will appreciate it.
## Using
Optimal parameters in Automatic1111/ComfyUI:
- Sampling steps: 20-35
- Sampler method: Euler a/Euler
- CFG Scale: 4-6.5
## End
Let's remove models that merely copy each other from the top of the charts and promote one that is actually developing. Thank you! |
bartowski/Kudzu-8B-exl2 | bartowski | 2024-05-24T21:19:10Z | 5 | 0 | transformers | [
"transformers",
"mergekit",
"merge",
"text-generation",
"base_model:Edgerunners/meta-llama-3-8b-instruct-hf-ortho-baukit-10fail-1000total",
"base_model:merge:Edgerunners/meta-llama-3-8b-instruct-hf-ortho-baukit-10fail-1000total",
"base_model:cgato/L3-TheSpice-8b-v0.8.3",
"base_model:merge:cgato/L3-TheSpice-8b-v0.8.3",
"base_model:lodrick-the-lafted/Limon-8B",
"base_model:merge:lodrick-the-lafted/Limon-8B",
"base_model:lodrick-the-lafted/Olethros-8B",
"base_model:merge:lodrick-the-lafted/Olethros-8B",
"base_model:lodrick-the-lafted/Rummage-8B",
"base_model:merge:lodrick-the-lafted/Rummage-8B",
"base_model:unsloth/llama-3-8b-Instruct",
"base_model:merge:unsloth/llama-3-8b-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T21:19:09Z | ---
license: apache-2.0
base_model:
- lodrick-the-lafted/Olethros-8B
- lodrick-the-lafted/Limon-8B
- lodrick-the-lafted/Rummage-8B
- cgato/L3-TheSpice-8b-v0.8.3
- unsloth/llama-3-8b-Instruct
- Edgerunners/meta-llama-3-8b-instruct-hf-ortho-baukit-10fail-1000total
library_name: transformers
tags:
- mergekit
- merge
quantized_by: bartowski
pipeline_tag: text-generation
---
## Exllama v2 Quantizations of Kudzu-8B
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.21">turboderp's ExLlamaV2 v0.0.21</a> for quantization.
<b>The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)</b>
Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.
Original model: https://huggingface.co/lodrick-the-lafted/Kudzu-8B
## Prompt format
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
## Available sizes
| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (8K) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/bartowski/Kudzu-8B-exl2/tree/8_0) | 8.0 | 8.0 | 10.1 GB | 10.5 GB | 11.5 GB | 13.6 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/Kudzu-8B-exl2/tree/6_5) | 6.5 | 8.0 | 8.9 GB | 9.3 GB | 10.3 GB | 12.4 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/bartowski/Kudzu-8B-exl2/tree/5_0) | 5.0 | 6.0 | 7.7 GB | 8.1 GB | 9.1 GB | 11.2 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/bartowski/Kudzu-8B-exl2/tree/4_25) | 4.25 | 6.0 | 7.0 GB | 7.4 GB | 8.4 GB | 10.5 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/bartowski/Kudzu-8B-exl2/tree/3_5) | 3.5 | 6.0 | 6.4 GB | 6.8 GB | 7.8 GB | 9.9 GB | Lower quality, only use if you have to. |
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Kudzu-8B-exl2 Kudzu-8B-exl2-6_5
```
With huggingface_hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download a specific branch, use the `--revision` parameter. For example, to download the 6.5 bpw branch:
Linux:
```shell
huggingface-cli download bartowski/Kudzu-8B-exl2 --revision 6_5 --local-dir Kudzu-8B-exl2-6_5
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
huggingface-cli download bartowski/Kudzu-8B-exl2 --revision 6_5 --local-dir Kudzu-8B-exl2-6.5
```
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
hgnoi/3ytNzq9MF6nDv8oi | hgnoi | 2024-05-24T21:18:23Z | 137 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T21:16:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SerchiBoi/DDT-gemma-Code-Instruct-Finetune-test | SerchiBoi | 2024-05-24T21:14:50Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-23T07:18:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
apwic/nerugm-lora-r2a2d0.05 | apwic | 2024-05-24T21:10:40Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"id",
"base_model:indolem/indobert-base-uncased",
"base_model:finetune:indolem/indobert-base-uncased",
"license:mit",
"region:us"
] | null | 2024-05-24T14:47:26Z | ---
language:
- id
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: nerugm-lora-r2a2d0.05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nerugm-lora-r2a2d0.05
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1346
- Precision: 0.7366
- Recall: 0.8629
- F1: 0.7948
- Accuracy: 0.9555
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.7885 | 1.0 | 528 | 0.4616 | 0.3182 | 0.0813 | 0.1296 | 0.8599 |
| 0.3921 | 2.0 | 1056 | 0.2524 | 0.6053 | 0.6798 | 0.6404 | 0.9273 |
| 0.2392 | 3.0 | 1584 | 0.1932 | 0.6500 | 0.7844 | 0.7109 | 0.9382 |
| 0.1931 | 4.0 | 2112 | 0.1676 | 0.6905 | 0.8234 | 0.7511 | 0.9444 |
| 0.1719 | 5.0 | 2640 | 0.1583 | 0.7056 | 0.8396 | 0.7668 | 0.9478 |
| 0.1602 | 6.0 | 3168 | 0.1539 | 0.7115 | 0.8582 | 0.7780 | 0.9502 |
| 0.1533 | 7.0 | 3696 | 0.1520 | 0.7031 | 0.8629 | 0.7748 | 0.9506 |
| 0.1455 | 8.0 | 4224 | 0.1456 | 0.7263 | 0.8559 | 0.7858 | 0.9525 |
| 0.1398 | 9.0 | 4752 | 0.1425 | 0.7301 | 0.8536 | 0.7870 | 0.9537 |
| 0.1368 | 10.0 | 5280 | 0.1395 | 0.7229 | 0.8536 | 0.7828 | 0.9533 |
| 0.1331 | 11.0 | 5808 | 0.1365 | 0.7360 | 0.8536 | 0.7904 | 0.9551 |
| 0.1305 | 12.0 | 6336 | 0.1377 | 0.7332 | 0.8605 | 0.7918 | 0.9549 |
| 0.1279 | 13.0 | 6864 | 0.1357 | 0.7415 | 0.8582 | 0.7956 | 0.9565 |
| 0.1251 | 14.0 | 7392 | 0.1355 | 0.7371 | 0.8652 | 0.7960 | 0.9555 |
| 0.1239 | 15.0 | 7920 | 0.1359 | 0.7366 | 0.8629 | 0.7948 | 0.9549 |
| 0.1231 | 16.0 | 8448 | 0.1347 | 0.7351 | 0.8629 | 0.7939 | 0.9551 |
| 0.122 | 17.0 | 8976 | 0.1353 | 0.7351 | 0.8629 | 0.7939 | 0.9555 |
| 0.1205 | 18.0 | 9504 | 0.1356 | 0.7317 | 0.8605 | 0.7909 | 0.9549 |
| 0.1202 | 19.0 | 10032 | 0.1347 | 0.7351 | 0.8629 | 0.7939 | 0.9551 |
| 0.1204 | 20.0 | 10560 | 0.1346 | 0.7366 | 0.8629 | 0.7948 | 0.9555 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
vuongnhathien/cconvnext-tiny-15ep-1e-4 | vuongnhathien | 2024-05-24T21:09:27Z | 200 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"convnextv2",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/convnextv2-tiny-22k-384",
"base_model:finetune:facebook/convnextv2-tiny-22k-384",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-24T17:34:34Z | ---
license: apache-2.0
base_model: facebook/convnextv2-tiny-22k-384
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: cconvnext-tiny-15ep-1e-4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9375
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cconvnext-tiny-15ep-1e-4
This model is a fine-tuned version of [facebook/convnextv2-tiny-22k-384](https://huggingface.co/facebook/convnextv2-tiny-22k-384) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2767
- Accuracy: 0.9375
## Model description
More information needed
## Intended uses & limitations
More information needed
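A minimal, unverified inference sketch using the standard 🤗 Transformers image-classification pipeline (the image path is a placeholder; the exact preprocessing used in training is not documented here):

```python
from transformers import pipeline

# Load this checkpoint into a generic image-classification pipeline.
clf = pipeline("image-classification", model="vuongnhathien/cconvnext-tiny-15ep-1e-4")
print(clf("example.jpg"))  # "example.jpg" is a placeholder path
```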
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5838 | 1.0 | 550 | 0.4097 | 0.8811 |
| 0.4565 | 2.0 | 1100 | 0.4269 | 0.8763 |
| 0.3628 | 3.0 | 1650 | 0.3464 | 0.9002 |
| 0.2915 | 4.0 | 2200 | 0.3366 | 0.9066 |
| 0.2655 | 5.0 | 2750 | 0.3387 | 0.9054 |
| 0.2395 | 6.0 | 3300 | 0.3313 | 0.9125 |
| 0.2065 | 7.0 | 3850 | 0.3120 | 0.9181 |
| 0.1503 | 8.0 | 4400 | 0.3065 | 0.9221 |
| 0.1503 | 9.0 | 4950 | 0.2948 | 0.9276 |
| 0.1125 | 10.0 | 5500 | 0.2918 | 0.9304 |
| 0.1057 | 11.0 | 6050 | 0.2954 | 0.9328 |
| 0.0937 | 12.0 | 6600 | 0.2959 | 0.9336 |
| 0.0966 | 13.0 | 7150 | 0.2940 | 0.9352 |
| 0.0735 | 14.0 | 7700 | 0.2916 | 0.9340 |
| 0.0881 | 15.0 | 8250 | 0.2902 | 0.9356 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
piotrklima/spaceInvaders | piotrklima | 2024-05-24T21:04:41Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-24T21:04:05Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 572.00 +/- 289.12
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga piotrklima -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga piotrklima -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga piotrklima
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Holarissun/REPROD_dpo_harmlessharmless_human_subset-1_modelgemma2b_maxsteps6000_bz8_lr5e-05 | Holarissun | 2024-05-24T21:04:39Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"license:gemma",
"region:us"
] | null | 2024-05-24T21:04:32Z | ---
license: gemma
library_name: peft
tags:
- trl
- dpo
- generated_from_trainer
base_model: google/gemma-2b
model-index:
- name: REPROD_dpo_harmlessharmless_human_subset-1_modelgemma2b_maxsteps6000_bz8_lr5e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# REPROD_dpo_harmlessharmless_human_subset-1_modelgemma2b_maxsteps6000_bz8_lr5e-05
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
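A minimal usage sketch, assuming the repo stores a PEFT LoRA adapter on top of the stated gemma-2b base (an assumption; the card does not show loading code):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this DPO-trained adapter.
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
base = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto")
model = PeftModel.from_pretrained(
    base, "Holarissun/REPROD_dpo_harmlessharmless_human_subset-1_modelgemma2b_maxsteps6000_bz8_lr5e-05"
)
```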
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 15
- training_steps: 6000
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 |
Winst/cross-encoder-ru | Winst | 2024-05-24T21:03:12Z | 2 | 0 | transformers | [
"transformers",
"xlm-roberta",
"feature-extraction",
"ru",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-24T10:53:13Z | ---
license: apache-2.0
language:
- ru
---
## Example
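The card does not show how `model` is constructed. One plausible reconstruction — purely an assumption, on the premise that the checkpoint carries a single-logit sequence-classification head — is the sketch below; adjust it to the actual architecture if it differs:

```python
# Hypothetical loading sketch -- not part of the original card.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Winst/cross-encoder-ru")
encoder = AutoModelForSequenceClassification.from_pretrained("Winst/cross-encoder-ru")
encoder.eval()

def model(text: str) -> float:
    # The examples below pass the query and the document together as one string.
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        logit = encoder(**inputs).logits.squeeze()
    return torch.sigmoid(logit).item()
```

The calls below then score a concatenated query/document string: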
```python
model("""Запрос: Сколько лег игре шахматы?
/Документ:
Недавно одна из известнейших развлекательных компаний опубликовала любопытные данные о всплеске
интереса к шахматам, согласно которым пользователи сети вдруг стали чаще искать в интернете все
о древнеиндийской игре. Если верить этой информации, то значительно выросло и число новых игроков
на онлайн‑платформе сhess.com, и количество гугл‑запросов «как играть в шахматы». Причиной оказался
сериал «Ход королевы», авторы которого и проводили собственное исследование о внезапной
новой популярности шахмат.""")
>>> 0.048548725
model("""Из-за чего вырос интерес к шахматам в последнее время?
/Документ:
Недавно одна из известнейших развлекательных компаний опубликовала любопытные данные о всплеске
интереса к шахматам, согласно которым пользователи сети вдруг стали чаще искать в интернете все
о древнеиндийской игре. Если верить этой информации, то значительно выросло и число новых игроков
на онлайн‑платформе сhess.com, и количество гугл‑запросов «как играть в шахматы». Причиной оказался
сериал «Ход королевы», авторы которого и проводили собственное исследование о внезапной
новой популярности шахмат.""")
>>> 0.81976813
``` |
ananasa/lora_phi3_30epochs | ananasa | 2024-05-24T21:00:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"base_model:finetune:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-24T21:00:49Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** ananasa
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Rimyy/MISTRALAMELIORER | Rimyy | 2024-05-24T20:57:58Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T20:53:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
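In the absence of a provided snippet, a generic sketch assuming standard 🤗 Transformers causal-LM usage for this checkpoint (the prompt is a placeholder):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Rimyy/MISTRALAMELIORER")
model = AutoModelForCausalLM.from_pretrained("Rimyy/MISTRALAMELIORER", device_map="auto")

# Placeholder prompt; the intended prompt format is undocumented.
inputs = tokenizer("Bonjour !", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```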
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Maneeha/Llama-2-7b-chat-finetune | Maneeha | 2024-05-24T20:57:23Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"unsloth",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-09T09:41:29Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
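In the absence of a provided snippet, a generic sketch assuming standard 🤗 Transformers causal-LM usage; the Llama-2-chat prompt format shown is an assumption, not confirmed by the card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Maneeha/Llama-2-7b-chat-finetune")
model = AutoModelForCausalLM.from_pretrained("Maneeha/Llama-2-7b-chat-finetune", device_map="auto")

prompt = "<s>[INST] What is fine-tuning? [/INST]"  # assumed Llama-2-chat format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```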
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
CMU-AIR2/math-phi-1-5-FULL-MWP-8k | CMU-AIR2 | 2024-05-24T20:51:22Z | 150 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T20:48:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
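In the absence of a provided snippet, a generic sketch assuming standard 🤗 Transformers causal-LM usage for this checkpoint; the math-word-problem prompt is illustrative and its expected format is undocumented:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("CMU-AIR2/math-phi-1-5-FULL-MWP-8k")
model = AutoModelForCausalLM.from_pretrained("CMU-AIR2/math-phi-1-5-FULL-MWP-8k", device_map="auto")

# Illustrative prompt only.
prompt = "Question: A pen costs 3 dollars. How much do 4 pens cost?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```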
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
JawadC/munster | JawadC | 2024-05-24T20:50:51Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-24T20:23:55Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of MUNSTER cheese
widget:
- text: A MUNSTER cheese on a rustic wooden table with a warm golden light.
output:
url: image_0.png
- text: A MUNSTER cheese on a rustic wooden table with a warm golden light.
output:
url: image_1.png
- text: A MUNSTER cheese on a rustic wooden table with a warm golden light.
output:
url: image_2.png
- text: A MUNSTER cheese on a rustic wooden table with a warm golden light.
output:
url: image_3.png
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - JawadC/munster
<Gallery />
## Model description
These are JawadC/munster LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of MUNSTER cheese` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/JawadC/munster/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
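A minimal sketch of the standard diffusers SDXL + LoRA workflow for these weights (assumed usage; the exact pipeline settings are not documented here). The fp16-fix VAE is loaded because the card states it was used during training:

```python
# Minimal sketch (assumed usage) -- standard SDXL + LoRA loading with diffusers.
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("JawadC/munster")

# Prompt built from the trigger phrase given in this card.
image = pipe("a photo of MUNSTER cheese on a rustic wooden table").images[0]
image.save("munster.png")
```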
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
devrim/dmd-cifar-10-cond | devrim | 2024-05-24T20:49:48Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-05T16:13:14Z | ---
license: apache-2.0
---
|
AIRI-Institute/HairFastGAN | AIRI-Institute | 2024-05-24T20:46:41Z | 0 | 45 | null | [
"image-to-image",
"arxiv:2404.01094",
"license:mit",
"region:us"
] | image-to-image | 2024-04-01T11:27:32Z | ---
license: mit
pipeline_tag: image-to-image
---
# HairFastGAN: Realistic and Robust Hair Transfer with a Fast Encoder-Based Approach
<a href="https://arxiv.org/abs/2404.01094"><img src="https://img.shields.io/badge/arXiv-2404.01094-b31b1b.svg" height=22.5></a>
<a href="https://huggingface.co/spaces/AIRI-Institute/HairFastGAN"><img src="https://huggingface.co/datasets/huggingface/badges/resolve/main/open-in-hf-spaces-sm.svg" height=22.5></a>
<a href="https://github.com/AIRI-Institute/HairFastGAN"><img src="https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white" height=22.5></a>
<a href="https://colab.research.google.com/#fileId=https://huggingface.co/AIRI-Institute/HairFastGAN/blob/main/notebooks/HairFast_inference.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" height=22.5></a>
> Our paper addresses the complex task of transferring a hairstyle from a reference image to an input photo for virtual hair try-on. This task is challenging due to the need to adapt to various photo poses, the sensitivity of hairstyles, and the lack of objective metrics. The current state of the art hairstyle transfer methods use an optimization process for different parts of the approach, making them inexcusably slow. At the same time, faster encoder-based models are of very low quality because they either operate in StyleGAN's W+ space or use other low-dimensional image generators. Additionally, both approaches have a problem with hairstyle transfer when the source pose is very different from the target pose, because they either don't consider the pose at all or deal with it inefficiently. In our paper, we present the HairFast model, which uniquely solves these problems and achieves high resolution, near real-time performance, and superior reconstruction compared to optimization problem-based methods. Our solution includes a new architecture operating in the FS latent space of StyleGAN, an enhanced inpainting approach, and improved encoders for better alignment, color transfer, and a new encoder for post-processing. The effectiveness of our approach is demonstrated on realism metrics after random hairstyle transfer and reconstruction when the original hairstyle is transferred. In the most difficult scenario of transferring both shape and color of a hairstyle from different images, our method performs in less than a second on the Nvidia V100.
>
<p align="center">
<img src="docs/assets/logo.webp" alt="Teaser"/>
<br>
The proposed HairFast framework allows to edit a hairstyle on an arbitrary photo based on an example from other photos. Here we have an example of how the method works by transferring a hairstyle from one photo and a hair color from another.
</p>
This repository contains the pretrained weights for our method; the inference and training code can be found on [GitHub](https://github.com/AIRI-Institute/HairFastGAN).
|
ppaudel/ctd-flant5-xxl | ppaudel | 2024-05-24T20:34:02Z | 143 | 0 | peft | [
"peft",
"region:us"
] | null | 2024-05-24T20:33:17Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
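A minimal loading sketch consistent with the quantization config above. The base model is assumed from the repo name to be `google/flan-t5-xxl`; the card does not state it:

```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, BitsAndBytesConfig

base_id = "google/flan-t5-xxl"  # assumed base; not named by the card
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSeq2SeqLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # mirrors load_in_8bit: True above
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "ppaudel/ctd-flant5-xxl")
```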
### Framework versions
- PEFT 0.6.0.dev0
|
JawadC/morbier | JawadC | 2024-05-24T20:22:00Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-24T19:55:20Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of Morbiere cheese
widget:
- text: Morbiere cheese on a natural bamboo mat with a soft focus.
output:
url: image_0.png
- text: Morbiere cheese on a natural bamboo mat with a soft focus.
output:
url: image_1.png
- text: Morbiere cheese on a natural bamboo mat with a soft focus.
output:
url: image_2.png
- text: Morbiere cheese on a natural bamboo mat with a soft focus.
output:
url: image_3.png
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - JawadC/morbier
<Gallery />
## Model description
These are JawadC/morbier LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of Morbiere cheese` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/JawadC/morbier/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
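A minimal sketch of the standard diffusers SDXL + LoRA workflow for these weights (assumed usage; the exact pipeline settings are not documented here). The fp16-fix VAE is loaded because the card states it was used during training:

```python
# Minimal sketch (assumed usage) -- standard SDXL + LoRA loading with diffusers.
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("JawadC/morbier")

# Prompt built from the trigger phrase given in this card.
image = pipe("a photo of Morbiere cheese on a natural bamboo mat").images[0]
image.save("morbier.png")
```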
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
kadirnar/Yolov10 | kadirnar | 2024-05-24T20:21:59Z | 166 | 45 | yolov10 | [
"yolov10",
"object-detection",
"computer-vision",
"pypi",
"dataset:detection-datasets/coco",
"arxiv:2405.14458",
"license:agpl-3.0",
"region:us"
] | object-detection | 2024-05-24T19:23:55Z | ---
license: agpl-3.0
tags:
- object-detection
- computer-vision
- yolov10
- pypi
datasets:
- detection-datasets/coco
---
### Model Description
[YOLOv10: Real-Time End-to-End Object Detection](https://arxiv.org/abs/2405.14458v1)
[Paper Repo: Implementation of paper - YOLOv10](https://github.com/THU-MIG/yolov10)
### Installation
```
pip install supervision git+https://github.com/THU-MIG/yolov10.git
```
### Yolov10 Inference
```python
from ultralytics import YOLOv10
import supervision as sv
import cv2
MODEL_PATH = 'yolov10n.pt'
IMAGE_PATH = 'dog.jpeg'
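# Load the pretrained YOLOv10 checkpoint and read the input image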
model = YOLOv10(MODEL_PATH)
image = cv2.imread(IMAGE_PATH)
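# Run inference (confidence threshold 0.25) and convert the results to supervision Detections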
results = model(source=image, conf=0.25, verbose=False)[0]
detections = sv.Detections.from_ultralytics(results)
box_annotator = sv.BoxAnnotator()
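# COCO class-id -> class-name mapping used to build human-readable labels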
category_dict = {
0: 'person', 1: 'bicycle', 2: 'car', 3: 'motorcycle', 4: 'airplane', 5: 'bus',
6: 'train', 7: 'truck', 8: 'boat', 9: 'traffic light', 10: 'fire hydrant',
11: 'stop sign', 12: 'parking meter', 13: 'bench', 14: 'bird', 15: 'cat',
16: 'dog', 17: 'horse', 18: 'sheep', 19: 'cow', 20: 'elephant', 21: 'bear',
22: 'zebra', 23: 'giraffe', 24: 'backpack', 25: 'umbrella', 26: 'handbag',
27: 'tie', 28: 'suitcase', 29: 'frisbee', 30: 'skis', 31: 'snowboard',
32: 'sports ball', 33: 'kite', 34: 'baseball bat', 35: 'baseball glove',
36: 'skateboard', 37: 'surfboard', 38: 'tennis racket', 39: 'bottle',
40: 'wine glass', 41: 'cup', 42: 'fork', 43: 'knife', 44: 'spoon', 45: 'bowl',
46: 'banana', 47: 'apple', 48: 'sandwich', 49: 'orange', 50: 'broccoli',
51: 'carrot', 52: 'hot dog', 53: 'pizza', 54: 'donut', 55: 'cake',
56: 'chair', 57: 'couch', 58: 'potted plant', 59: 'bed', 60: 'dining table',
61: 'toilet', 62: 'tv', 63: 'laptop', 64: 'mouse', 65: 'remote', 66: 'keyboard',
67: 'cell phone', 68: 'microwave', 69: 'oven', 70: 'toaster', 71: 'sink',
72: 'refrigerator', 73: 'book', 74: 'clock', 75: 'vase', 76: 'scissors',
77: 'teddy bear', 78: 'hair drier', 79: 'toothbrush'
}
labels = [
f"{category_dict[class_id]} {confidence:.2f}"
for class_id, confidence in zip(detections.class_id, detections.confidence)
]
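# Draw the labeled boxes on a copy of the image and save the result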
annotated_image = box_annotator.annotate(
image.copy(), detections=detections, labels=labels
)
cv2.imwrite('annotated_dog.jpeg', annotated_image)
```
### BibTeX Entry and Citation Info
```
@misc{wang2024yolov10,
title={YOLOv10: Real-Time End-to-End Object Detection},
author={Ao Wang and Hui Chen and Lihao Liu and Kai Chen and Zijia Lin and Jungong Han and Guiguang Ding},
year={2024},
eprint={2405.14458},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
amireusya/hijklmixanime_v3 | amireusya | 2024-05-24T20:18:50Z | 1 | 0 | diffusers | [
"diffusers",
"art",
"text-to-image",
"ru",
"license:apache-2.0",
"region:us"
] | text-to-image | 2024-05-24T20:08:14Z | ---
license: apache-2.0
language:
- ru
library_name: diffusers
pipeline_tag: text-to-image
tags:
- art
--- |
tonyassi/fashion-clothing-decade | tonyassi | 2024-05-24T20:09:21Z | 32 | 13 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-11-13T22:03:27Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
- image-classification
metrics:
- accuracy
model-index:
- name: fashion-clothing-decade
results: []
pipeline_tag: image-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Fashion Clothing Decade
This model predicts what decade clothing is from. It takes an image and outputs one of the following labels:
**1910s, 1920s, 1930s, 1940s, 1950s, 1960s, 1970s, 1980s, 1990s, 2000s**
Try the [demo](https://huggingface.co/spaces/tonyassi/Which-decade-are-you-from)!
### How to use
```python
from transformers import pipeline
pipe = pipeline("image-classification", model="tonyassi/fashion-clothing-decade")
result = pipe('image.png')
print(result)
```
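`result` is a list of `{'label', 'score'}` dictionaries sorted by score (the pipeline returns the top 5 classes by default), so `result[0]['label']` is the predicted decade.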
## Dataset
Trained on a total of 2500 images, roughly 250 per label.
### 1910s

### 1920s

### 1930s

### 1940s

### 1950s

### 1960s

### 1970s

### 1980s

### 1990s

### 2000s

## Model description
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k).
## Training and evaluation data
Evaluation set results:
- Loss: 0.8707
- Accuracy: 0.7505
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1 |
RichardErkhov/Rakuten_-_RakutenAI-7B-instruct-gguf | RichardErkhov | 2024-05-24T20:08:37Z | 35 | 0 | null | [
"gguf",
"arxiv:2403.15484",
"endpoints_compatible",
"region:us"
] | null | 2024-05-24T17:59:29Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
RakutenAI-7B-instruct - GGUF
- Model creator: https://huggingface.co/Rakuten/
- Original model: https://huggingface.co/Rakuten/RakutenAI-7B-instruct/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [RakutenAI-7B-instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-instruct-gguf/blob/main/RakutenAI-7B-instruct.Q2_K.gguf) | Q2_K | 2.6GB |
| [RakutenAI-7B-instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-instruct-gguf/blob/main/RakutenAI-7B-instruct.IQ3_XS.gguf) | IQ3_XS | 2.89GB |
| [RakutenAI-7B-instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-instruct-gguf/blob/main/RakutenAI-7B-instruct.IQ3_S.gguf) | IQ3_S | 3.04GB |
| [RakutenAI-7B-instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-instruct-gguf/blob/main/RakutenAI-7B-instruct.Q3_K_S.gguf) | Q3_K_S | 3.02GB |
| [RakutenAI-7B-instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-instruct-gguf/blob/main/RakutenAI-7B-instruct.IQ3_M.gguf) | IQ3_M | 3.14GB |
| [RakutenAI-7B-instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-instruct-gguf/blob/main/RakutenAI-7B-instruct.Q3_K.gguf) | Q3_K | 3.35GB |
| [RakutenAI-7B-instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-instruct-gguf/blob/main/RakutenAI-7B-instruct.Q3_K_M.gguf) | Q3_K_M | 3.35GB |
| [RakutenAI-7B-instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-instruct-gguf/blob/main/RakutenAI-7B-instruct.Q3_K_L.gguf) | Q3_K_L | 3.64GB |
| [RakutenAI-7B-instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-instruct-gguf/blob/main/RakutenAI-7B-instruct.IQ4_XS.gguf) | IQ4_XS | 3.76GB |
| [RakutenAI-7B-instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-instruct-gguf/blob/main/RakutenAI-7B-instruct.Q4_0.gguf) | Q4_0 | 3.91GB |
| [RakutenAI-7B-instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-instruct-gguf/blob/main/RakutenAI-7B-instruct.IQ4_NL.gguf) | IQ4_NL | 3.95GB |
| [RakutenAI-7B-instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-instruct-gguf/blob/main/RakutenAI-7B-instruct.Q4_K_S.gguf) | Q4_K_S | 3.94GB |
| [RakutenAI-7B-instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-instruct-gguf/blob/main/RakutenAI-7B-instruct.Q4_K.gguf) | Q4_K | 4.15GB |
| [RakutenAI-7B-instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-instruct-gguf/blob/main/RakutenAI-7B-instruct.Q4_K_M.gguf) | Q4_K_M | 4.15GB |
| [RakutenAI-7B-instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-instruct-gguf/blob/main/RakutenAI-7B-instruct.Q4_1.gguf) | Q4_1 | 4.33GB |
| [RakutenAI-7B-instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-instruct-gguf/blob/main/RakutenAI-7B-instruct.Q5_0.gguf) | Q5_0 | 4.75GB |
| [RakutenAI-7B-instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-instruct-gguf/blob/main/RakutenAI-7B-instruct.Q5_K_S.gguf) | Q5_K_S | 4.75GB |
| [RakutenAI-7B-instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-instruct-gguf/blob/main/RakutenAI-7B-instruct.Q5_K.gguf) | Q5_K | 4.87GB |
| [RakutenAI-7B-instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-instruct-gguf/blob/main/RakutenAI-7B-instruct.Q5_K_M.gguf) | Q5_K_M | 4.87GB |
| [RakutenAI-7B-instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-instruct-gguf/blob/main/RakutenAI-7B-instruct.Q5_1.gguf) | Q5_1 | 5.16GB |
| [RakutenAI-7B-instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-instruct-gguf/blob/main/RakutenAI-7B-instruct.Q6_K.gguf) | Q6_K | 5.63GB |
| [RakutenAI-7B-instruct.Q8_0.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-instruct-gguf/blob/main/RakutenAI-7B-instruct.Q8_0.gguf) | Q8_0 | 7.3GB |
Original model description:
---
license: apache-2.0
---
# RakutenAI-7B-instruct
## Model Description
RakutenAI-7B is a systematic initiative that brings the latest technologies to the world of Japanese LLMs. RakutenAI-7B achieves the best scores on the Japanese language understanding benchmarks while maintaining a competitive performance on the English test sets among similar models such as OpenCalm, Elyza, Youri, Nekomata and Swallow. RakutenAI-7B leverages the Mistral model architecture and is based on [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) pre-trained checkpoint, exemplifying a successful retrofitting of the pre-trained model weights. Moreover, we extend Mistral's vocabulary from 32k to 48k to offer a better character-per-token rate for Japanese.
*The technical report can be accessed at [arXiv](https://arxiv.org/abs/2403.15484).*
*If you are looking for a foundation model, check [RakutenAI-7B](https://huggingface.co/Rakuten/RakutenAI-7B)*.
*If you are looking for a chat-tuned model, check [RakutenAI-7B-chat](https://huggingface.co/Rakuten/RakutenAI-7B-chat)*.
## Model Evaluation Results
| Model Name | 7-Avg. excl. XLSum-ja | Avg. | JCS | JNLI | MARC-ja | JSQuAD | Jaqket v2 | XLSum-ja | xWino | MGSM |
|-------------------------------|:--------:|:-----:|:-------:|:-------:|:-------:|:-------:|:---------:|:--------:|:------:|:-------:|
| | | | accuracy | accuracy | accuracy | exact-match | exact-match | rouge-2 | accuracy | accuracy |
| | | | 3-shots | 3-shots | 3-shots | 2-shots | 1-shot | 1-shot | 0-shot | 5-shots |
| rakuten-ai-7b-instruct | 77.32 | 68.74 | 93.03 | 90.39 | 96.00 | 80.44 | 81.79 | 8.67 | 75.18 | 24.40 |
| youri-7b-instruction | 73.35 | 66.84 | 86.06 | 70.13 | 97.03 | 82.53 | 79.47 | 21.29 | 79.04 | 19.20 |
| japanese-stablelm-instruct-gamma-7b | 65.46 | 59.98 | 83.82 | 16.97 | 95.68 | 76.20 | 81.87 | 21.58 | 82.06 | 21.60 |
| swallow-7b-instruct | 64.29 | 58.25 | 83.38 | 26.50 | 94.46 | 75.62 | 81.01 | 16.01 | 76.23 | 12.80 |
| elyza-japanese-Llama-2-7b-instruct | 60.04 | 53.19 | 65.15 | 57.44 | 91.51 | 67.29 | 58.51 | 5.20 | 70.80 | 9.60 |
| elyza-japanese-Llama-2-7b-fast-instruct | 57.22 | 50.48 | 70.69 | 36.48 | 92.75 | 68.87 | 62.29 | 3.36 | 59.44 | 10.00 |
| nekomata-7b-instruction | 49.04 | 44.14 | 85.08 | 42.48 | 96.99 | 8.51 | 10.91 | 9.81 | 76.12 | 23.20 |
<div style="text-align: center;">Table1: RakutenAI-7B-instruct model performance on Japanese LM-Harness metrics in comparison with other models.</div>
Our model achieves the highest average score, more than 3 points ahead of the next best model. The models are sorted by 7-Avg. We use the following commit https://github.com/Stability-AI/lm-evaluation-harness/tree/0fa86429679f521161d5b81a94c0c385e0a0976d for Japanese LM-Harness with v0.3 prompt version.
| Model Name | Avg. | ARC | HellaSwag | MMLU | TruthfulQA |
|---------------------------------|:----------------:|:------------------------:|:------------------------:|:-----------------------:|:-----------------------:|
| | | accuracy | accuracy | accuracy | accuracy |
| | | 25-shots | 10-shots | 5-shots | 6-shots |
| rakuten-ai-7b-instruct | 61.32 | 58.62 | 82.70 | 60.32 | 43.63 |
| japanese-stablelm-instruct-gamma-7b | 55.91 | 50.43 | 77.10 | 54.61 | 41.50 |
| elyza-japanese-Llama-2-7b-fast-instruct | 54.21 | 53.58 | 77.69 | 46.91 | 38.67 |
| elyza-japanese-Llama-2-7b-instruct | 54.07 | 52.05 | 78.33 | 47.09 | 38.83 |
| nekomata-7b-instruction | 52.84 | 50.34 | 73.67 | 48.53 | 38.81 |
| youri-7b-instruction | 52.11 | 48.98 | 75.66 | 45.41 | 38.38 |
| swallow-7b-instruct | 50.32 | 47.61 | 72.27 | 40.77 | 40.62 |
<div style="text-align: center;">Table2: RakutenAI-7B-instruct model performance on English LM-Harness metrics in comparison with other models. </div>
Our model achieves the highest average score, more than 5 points ahead of the next best model. We use the following commit for English LM-Harness https://github.com/EleutherAI/lm-evaluation-harness/tree/b281b0921b636bc36ad05c0b0b0763bd6dd43463.
An independent evaluation by Kamata et.al. for [Nejumi LLMリーダーボード Neo](https://wandb.ai/wandb-japan/llm-leaderboard/reports/Nejumi-LLM-Neo--Vmlldzo2MTkyMTU0#総合評価) using a weighted average of [llm-jp-eval](https://github.com/llm-jp/llm-jp-eval) and [Japanese MT-bench](https://github.com/Stability-AI/FastChat/tree/jp-stable/fastchat/llm_judge) also confirms the highest performance of instruct/chat versions of RakutenAI-7B.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "Rakuten/RakutenAI-7B-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype="auto", device_map="auto")
model.eval()
requests = [
    "「馬が合う」はどう言う意味ですか",  # "What does 「馬が合う」 (an idiom: 'to hit it off / get along well') mean?"
    "How to make an authentic Spanish Omelette?",
]

system_message = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {user_input} ASSISTANT:"

for req in requests:
    input_req = system_message.format(user_input=req)
    input_ids = tokenizer.encode(input_req, return_tensors="pt").to(device=model.device)
    tokens = model.generate(
        input_ids,
        max_new_tokens=1024,
        do_sample=True,
        pad_token_id=tokenizer.eos_token_id,
    )
    out = tokenizer.decode(tokens[0][len(input_ids[0]):], skip_special_tokens=True)
    print("USER:\n" + req)
    print("ASSISTANT:\n" + out)
    print()
    print()
```
## Model Details
* **Developed by**: [Rakuten Group, Inc.](https://ai.rakuten.com/)
* **Language(s)**: Japanese, English
* **License**: This model is licensed under [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
* **Instruction-Tuning Dataset**: We fine-tune our foundation model to create RakutenAI-7B-instruct and RakutenAI-7B-chat using a mix of open source and internally hand-crafted datasets. We use `train` part of the following datasets (CC by-SA License) for instruction-tuned and chat-tuned models:
- [JSNLI](https://nlp.ist.i.kyoto-u.ac.jp/?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88)
- [RTE](https://nlp.ist.i.kyoto-u.ac.jp/?Textual+Entailment+%E8%A9%95%E4%BE%A1%E3%83%87%E3%83%BC%E3%82%BF)
- [KUCI](https://nlp.ist.i.kyoto-u.ac.jp/?KUCI)
- [BELEBELE](https://huggingface.co/datasets/facebook/belebele)
- [JCS](https://aclanthology.org/2022.lrec-1.317/)
- [JNLI](https://aclanthology.org/2022.lrec-1.317/)
- [Dolly-15K](https://huggingface.co/datasets/databricks/databricks-dolly-15k)
- [OpenAssistant1](https://huggingface.co/datasets/OpenAssistant/oasst1)
### Limitations and Bias
The suite of RakutenAI-7B models is capable of generating human-like text on a wide range of topics. However, like all LLMs, they have limitations and can produce biased, inaccurate, or unsafe outputs. Please exercise caution and judgement while interacting with them.
## Citation
For citing our work on the suite of RakutenAI-7B models, please use:
```
@misc{rakutengroup2024rakutenai7b,
title={RakutenAI-7B: Extending Large Language Models for Japanese},
author={{Rakuten Group, Inc.} and Aaron Levine and Connie Huang and Chenguang Wang and Eduardo Batista and Ewa Szymanska and Hongyi Ding and Hou Wei Chou and Jean-François Pessiot and Johanes Effendi and Justin Chiu and Kai Torben Ohlhus and Karan Chopra and Keiji Shinzato and Koji Murakami and Lee Xiong and Lei Chen and Maki Kubota and Maksim Tkachenko and Miroku Lee and Naoki Takahashi and Prathyusha Jwalapuram and Ryutaro Tatsushima and Saurabh Jain and Sunil Kumar Yadav and Ting Cai and Wei-Te Chen and Yandi Xia and Yuki Nakayama and Yutaka Higashiyama},
year={2024},
eprint={2403.15484},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
khof312/mms-tts-swh-female-2 | khof312 | 2024-05-24T20:07:12Z | 113 | 0 | transformers | [
"transformers",
"safetensors",
"vits",
"text-to-audio",
"audio",
"mms",
"mms-tts-swh",
"text-to-speech",
"sw",
"dataset:mozilla-foundation/common_voice_16_1",
"dataset:khof312/mf_cv16_sw_voice2_preprocessed",
"endpoints_compatible",
"region:us"
] | text-to-speech | 2024-05-22T21:15:26Z | ---
datasets:
- mozilla-foundation/common_voice_16_1
- khof312/mf_cv16_sw_voice2_preprocessed
language:
- sw
pipeline_tag: text-to-speech
tags:
- audio
- mms
- vits
- mms-tts-swh
--- |
bartowski/Llama-3-Yggdrasil-8B-GGUF | bartowski | 2024-05-24T20:04:55Z | 104 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"text-generation",
"base_model:Locutusque/Llama-3-Hercules-5.0-8B",
"base_model:merge:Locutusque/Llama-3-Hercules-5.0-8B",
"base_model:NousResearch/Meta-Llama-3-8B",
"base_model:merge:NousResearch/Meta-Llama-3-8B",
"base_model:NousResearch/Meta-Llama-3-8B-Instruct",
"base_model:merge:NousResearch/Meta-Llama-3-8B-Instruct",
"license:llama3",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T19:44:12Z | ---
base_model:
- NousResearch/Meta-Llama-3-8B
- NousResearch/Meta-Llama-3-8B-Instruct
- Locutusque/Llama-3-Hercules-5.0-8B
- Locutusque/llama-3-neural-chat-v2.2-8b
library_name: transformers
tags:
- mergekit
- merge
license: llama3
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of Llama-3-Yggdrasil-8B
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2965">b2965</a> for quantization.
Original model: https://huggingface.co/Locutusque/Llama-3-Yggdrasil-8B
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/b6ac44691e994344625687afe3263b3a)
## Prompt format
No chat template was specified, so a default is used. This may be incorrect; check the original model card for details.
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Llama-3-Yggdrasil-8B-Q8_0.gguf](https://huggingface.co/bartowski/Llama-3-Yggdrasil-8B-GGUF/blob/main/Llama-3-Yggdrasil-8B-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. |
| [Llama-3-Yggdrasil-8B-Q6_K.gguf](https://huggingface.co/bartowski/Llama-3-Yggdrasil-8B-GGUF/blob/main/Llama-3-Yggdrasil-8B-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. |
| [Llama-3-Yggdrasil-8B-Q5_K_M.gguf](https://huggingface.co/bartowski/Llama-3-Yggdrasil-8B-GGUF/blob/main/Llama-3-Yggdrasil-8B-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. |
| [Llama-3-Yggdrasil-8B-Q5_K_S.gguf](https://huggingface.co/bartowski/Llama-3-Yggdrasil-8B-GGUF/blob/main/Llama-3-Yggdrasil-8B-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. |
| [Llama-3-Yggdrasil-8B-Q4_K_M.gguf](https://huggingface.co/bartowski/Llama-3-Yggdrasil-8B-GGUF/blob/main/Llama-3-Yggdrasil-8B-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Llama-3-Yggdrasil-8B-Q4_K_S.gguf](https://huggingface.co/bartowski/Llama-3-Yggdrasil-8B-GGUF/blob/main/Llama-3-Yggdrasil-8B-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. |
| [Llama-3-Yggdrasil-8B-IQ4_NL.gguf](https://huggingface.co/bartowski/Llama-3-Yggdrasil-8B-GGUF/blob/main/Llama-3-Yggdrasil-8B-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [Llama-3-Yggdrasil-8B-IQ4_XS.gguf](https://huggingface.co/bartowski/Llama-3-Yggdrasil-8B-GGUF/blob/main/Llama-3-Yggdrasil-8B-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Llama-3-Yggdrasil-8B-Q3_K_L.gguf](https://huggingface.co/bartowski/Llama-3-Yggdrasil-8B-GGUF/blob/main/Llama-3-Yggdrasil-8B-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. |
| [Llama-3-Yggdrasil-8B-Q3_K_M.gguf](https://huggingface.co/bartowski/Llama-3-Yggdrasil-8B-GGUF/blob/main/Llama-3-Yggdrasil-8B-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. |
| [Llama-3-Yggdrasil-8B-IQ3_M.gguf](https://huggingface.co/bartowski/Llama-3-Yggdrasil-8B-GGUF/blob/main/Llama-3-Yggdrasil-8B-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Llama-3-Yggdrasil-8B-IQ3_S.gguf](https://huggingface.co/bartowski/Llama-3-Yggdrasil-8B-GGUF/blob/main/Llama-3-Yggdrasil-8B-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [Llama-3-Yggdrasil-8B-Q3_K_S.gguf](https://huggingface.co/bartowski/Llama-3-Yggdrasil-8B-GGUF/blob/main/Llama-3-Yggdrasil-8B-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. |
| [Llama-3-Yggdrasil-8B-IQ3_XS.gguf](https://huggingface.co/bartowski/Llama-3-Yggdrasil-8B-GGUF/blob/main/Llama-3-Yggdrasil-8B-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Llama-3-Yggdrasil-8B-IQ3_XXS.gguf](https://huggingface.co/bartowski/Llama-3-Yggdrasil-8B-GGUF/blob/main/Llama-3-Yggdrasil-8B-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Llama-3-Yggdrasil-8B-Q2_K.gguf](https://huggingface.co/bartowski/Llama-3-Yggdrasil-8B-GGUF/blob/main/Llama-3-Yggdrasil-8B-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. |
| [Llama-3-Yggdrasil-8B-IQ2_M.gguf](https://huggingface.co/bartowski/Llama-3-Yggdrasil-8B-GGUF/blob/main/Llama-3-Yggdrasil-8B-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Llama-3-Yggdrasil-8B-IQ2_S.gguf](https://huggingface.co/bartowski/Llama-3-Yggdrasil-8B-GGUF/blob/main/Llama-3-Yggdrasil-8B-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. |
| [Llama-3-Yggdrasil-8B-IQ2_XS.gguf](https://huggingface.co/bartowski/Llama-3-Yggdrasil-8B-GGUF/blob/main/Llama-3-Yggdrasil-8B-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. |
| [Llama-3-Yggdrasil-8B-IQ2_XXS.gguf](https://huggingface.co/bartowski/Llama-3-Yggdrasil-8B-GGUF/blob/main/Llama-3-Yggdrasil-8B-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Lower quality, uses SOTA techniques to be usable. |
| [Llama-3-Yggdrasil-8B-IQ1_M.gguf](https://huggingface.co/bartowski/Llama-3-Yggdrasil-8B-GGUF/blob/main/Llama-3-Yggdrasil-8B-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. |
| [Llama-3-Yggdrasil-8B-IQ1_S.gguf](https://huggingface.co/bartowski/Llama-3-Yggdrasil-8B-GGUF/blob/main/Llama-3-Yggdrasil-8B-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Llama-3-Yggdrasil-8B-GGUF --include "Llama-3-Yggdrasil-8B-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Llama-3-Yggdrasil-8B-GGUF --include "Llama-3-Yggdrasil-8B-Q8_0.gguf/*" --local-dir Llama-3-Yggdrasil-8B-Q8_0
```
You can either specify a new local-dir (Llama-3-Yggdrasil-8B-Q8_0) or download them all in place (./).
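If you prefer Python, the same single-file download can be done with `huggingface_hub` (a minimal sketch; substitute any filename from the table above):
```python
from huggingface_hub import hf_hub_download

# Download one quant file into ./models and print its local path
path = hf_hub_download(
    repo_id="bartowski/Llama-3-Yggdrasil-8B-GGUF",
    filename="Llama-3-Yggdrasil-8B-Q4_K_M.gguf",
    local_dir="./models",
)
print(path)
```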
## Which file should I choose?
A great write-up with charts showing the performance of the various quant types is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also supports AMD cards, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
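To make the sizing rule concrete, here is a small sketch (file sizes come from the table above; the 1.5GB headroom is an assumption in the middle of the suggested 1-2GB range):
```python
# Pick the largest quant whose file leaves ~1.5GB of VRAM headroom
QUANT_SIZES_GB = {
    "Q8_0": 8.54, "Q6_K": 6.59, "Q5_K_M": 5.73, "Q4_K_M": 4.92,
    "IQ4_XS": 4.44, "Q3_K_M": 4.01, "IQ3_M": 3.78, "Q2_K": 3.17,
}

def pick_quant(vram_gb: float, headroom_gb: float = 1.5) -> str:
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s <= vram_gb - headroom_gb}
    if not fitting:
        raise ValueError("No quant fits; consider splitting between RAM and VRAM.")
    return max(fitting, key=fitting.get)

print(pick_quant(8.0))  # 'Q5_K_M' on an 8GB card
```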
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
tonyassi/vogue-fashion-collection-15 | tonyassi | 2024-05-24T20:04:30Z | 214 | 5 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-12-28T06:41:05Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
widget:
- src: >-
https://cdn.discordapp.com/attachments/1120417968032063538/1191101288428097727/1.jpg?ex=65a43684&is=6591c184&hm=aed9f3278325ea30e30557e201adcfc43ce2ce77f2218b5f8f232a26b4ac2985&
- src: >-
https://cdn.discordapp.com/attachments/1120417968032063538/1191101301698867260/2.jpg?ex=65a43687&is=6591c187&hm=dee873150a2910177be30e5141f008b70ba7f55266e1e8725b422bfe0e6213f8&
metrics:
- accuracy
model-index:
- name: vogue-fashion-collection-15
results: []
pipeline_tag: image-classification
---
# vogue-fashion-collection-15
## Model description
This model classifies an image into a fashion collection. It is trained on the [tonyassi/vogue-runway-top15-512px](https://huggingface.co/datasets/tonyassi/vogue-runway-top15-512px) dataset and is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k).
Try the [demo](https://huggingface.co/spaces/tonyassi/which-fashion-collection).
## Dataset description
[tonyassi/vogue-runway-top15-512px](https://huggingface.co/datasets/tonyassi/vogue-runway-top15-512px)
- 15 fashion houses
- 1679 collections
- 87,547 images
### How to use
```python
from transformers import pipeline
# Initialize image classification pipeline
pipe = pipeline("image-classification", model="tonyassi/vogue-fashion-collection-15")
# Perform classification
result = pipe('image.png')
# Print results
print(result)
```
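The pipeline also accepts PIL images; a quick sketch (the URL below is a placeholder, substitute any runway image):
```python
from PIL import Image
import requests

# Placeholder URL; any runway image will do
url = "https://example.com/runway-look.jpg"
image = Image.open(requests.get(url, stream=True).raw)

result = pipe(image)
print(result[0])  # top prediction, e.g. {'label': '...', 'score': 0.97}
```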
## Examples

**fendi,spring 2023 couture**

**gucci,spring 2017 ready to wear**

**prada,fall 2018 ready to wear**
## Training and evaluation data
It achieves the following results on the evaluation set:
- Loss: 0.1795
- Accuracy: 0.9454
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0 |
canho/koalpaca-5.8b-10epochs-0524 | canho | 2024-05-24T19:59:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-24T19:59:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Niggendar/stardropXL_v10 | Niggendar | 2024-05-24T19:55:51Z | 119 | 2 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-05-24T19:46:45Z | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Nunavathkumar/mistral-finetuned-alpaca | Nunavathkumar | 2024-05-24T19:45:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-24T12:23:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
apwic/nerugm-lora-r2a0d0.15 | apwic | 2024-05-24T19:43:59Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"id",
"base_model:indolem/indobert-base-uncased",
"base_model:finetune:indolem/indobert-base-uncased",
"license:mit",
"region:us"
] | null | 2024-05-24T14:40:42Z | ---
language:
- id
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: nerugm-lora-r2a0d0.15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nerugm-lora-r2a0d0.15
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1346
- Precision: 0.7342
- Recall: 0.8652
- F1: 0.7943
- Accuracy: 0.9555
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.79 | 1.0 | 528 | 0.4638 | 0.3302 | 0.0813 | 0.1305 | 0.8595 |
| 0.3919 | 2.0 | 1056 | 0.2519 | 0.5954 | 0.6729 | 0.6318 | 0.9275 |
| 0.2386 | 3.0 | 1584 | 0.1927 | 0.6540 | 0.7908 | 0.7159 | 0.9382 |
| 0.193 | 4.0 | 2112 | 0.1677 | 0.6826 | 0.8234 | 0.7464 | 0.9448 |
| 0.1712 | 5.0 | 2640 | 0.1594 | 0.6959 | 0.8443 | 0.7629 | 0.9476 |
| 0.1596 | 6.0 | 3168 | 0.1544 | 0.7082 | 0.8559 | 0.7751 | 0.9498 |
| 0.1524 | 7.0 | 3696 | 0.1519 | 0.7012 | 0.8605 | 0.7728 | 0.9506 |
| 0.1452 | 8.0 | 4224 | 0.1461 | 0.7203 | 0.8605 | 0.7842 | 0.9522 |
| 0.1397 | 9.0 | 4752 | 0.1432 | 0.7263 | 0.8559 | 0.7858 | 0.9535 |
| 0.1369 | 10.0 | 5280 | 0.1394 | 0.7258 | 0.8536 | 0.7845 | 0.9539 |
| 0.1336 | 11.0 | 5808 | 0.1375 | 0.7321 | 0.8512 | 0.7872 | 0.9543 |
| 0.1305 | 12.0 | 6336 | 0.1375 | 0.7345 | 0.8536 | 0.7896 | 0.9547 |
| 0.1281 | 13.0 | 6864 | 0.1351 | 0.7330 | 0.8536 | 0.7887 | 0.9547 |
| 0.1252 | 14.0 | 7392 | 0.1360 | 0.7342 | 0.8652 | 0.7943 | 0.9553 |
| 0.124 | 15.0 | 7920 | 0.1364 | 0.7292 | 0.8559 | 0.7875 | 0.9541 |
| 0.1234 | 16.0 | 8448 | 0.1351 | 0.7260 | 0.8605 | 0.7876 | 0.9549 |
| 0.1224 | 17.0 | 8976 | 0.1357 | 0.7299 | 0.8652 | 0.7918 | 0.9549 |
| 0.1208 | 18.0 | 9504 | 0.1360 | 0.7333 | 0.8675 | 0.7948 | 0.9553 |
| 0.1201 | 19.0 | 10032 | 0.1350 | 0.7347 | 0.8675 | 0.7956 | 0.9555 |
| 0.1205 | 20.0 | 10560 | 0.1346 | 0.7342 | 0.8652 | 0.7943 | 0.9555 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
ArashAhmadian/bb_repro | ArashAhmadian | 2024-05-24T19:42:46Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T19:38:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Niggendar/obsidianpdxl_V03 | Niggendar | 2024-05-24T19:33:56Z | 76 | 2 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-05-24T19:25:31Z | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bartowski/Llama-3-Yggdrasil-8B-exl2 | bartowski | 2024-05-24T19:30:17Z | 2 | 0 | transformers | [
"transformers",
"mergekit",
"merge",
"text-generation",
"base_model:Locutusque/Llama-3-Hercules-5.0-8B",
"base_model:merge:Locutusque/Llama-3-Hercules-5.0-8B",
"base_model:NousResearch/Meta-Llama-3-8B",
"base_model:merge:NousResearch/Meta-Llama-3-8B",
"base_model:NousResearch/Meta-Llama-3-8B-Instruct",
"base_model:merge:NousResearch/Meta-Llama-3-8B-Instruct",
"license:llama3",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T19:30:16Z | ---
base_model:
- NousResearch/Meta-Llama-3-8B
- NousResearch/Meta-Llama-3-8B-Instruct
- Locutusque/Llama-3-Hercules-5.0-8B
- Locutusque/llama-3-neural-chat-v2.2-8b
library_name: transformers
tags:
- mergekit
- merge
license: llama3
quantized_by: bartowski
pipeline_tag: text-generation
---
## Exllama v2 Quantizations of Llama-3-Yggdrasil-8B
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.21">turboderp's ExLlamaV2 v0.0.21</a> for quantization.
<b>The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)</b>
Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.
Original model: https://huggingface.co/Locutusque/Llama-3-Yggdrasil-8B
## Prompt format
No chat template was specified, so the default is used. This may be incorrect; check the original model card for details.
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
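A minimal sketch of assembling this prompt by hand (the system and user strings are placeholders):
```python
system_prompt = "You are a helpful assistant."  # placeholder
user_prompt = "Hello!"  # placeholder

text = (
    f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
    f"<|im_start|>user\n{user_prompt}<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```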
## Available sizes
| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (8K) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/bartowski/Llama-3-Yggdrasil-8B-exl2/tree/8_0) | 8.0 | 8.0 | 10.1 GB | 10.5 GB | 11.5 GB | 13.6 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/Llama-3-Yggdrasil-8B-exl2/tree/6_5) | 6.5 | 8.0 | 8.9 GB | 9.3 GB | 10.3 GB | 12.4 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/bartowski/Llama-3-Yggdrasil-8B-exl2/tree/5_0) | 5.0 | 6.0 | 7.7 GB | 8.1 GB | 9.1 GB | 11.2 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/bartowski/Llama-3-Yggdrasil-8B-exl2/tree/4_25) | 4.25 | 6.0 | 7.0 GB | 7.4 GB | 8.4 GB | 10.5 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/bartowski/Llama-3-Yggdrasil-8B-exl2/tree/3_5) | 3.5 | 6.0 | 6.4 GB | 6.8 GB | 7.8 GB | 9.9 GB | Lower quality, only use if you have to. |
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Llama-3-Yggdrasil-8B-exl2 Llama-3-Yggdrasil-8B-exl2-6_5
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download a specific branch, use the `--revision` parameter. For example, to download the 6.5 bpw branch:
Linux:
```shell
huggingface-cli download bartowski/Llama-3-Yggdrasil-8B-exl2 --revision 6_5 --local-dir Llama-3-Yggdrasil-8B-exl2-6_5
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
huggingface-cli download bartowski/Llama-3-Yggdrasil-8B-exl2 --revision 6_5 --local-dir Llama-3-Yggdrasil-8B-exl2-6.5
```
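Or, with the `huggingface_hub` Python API (a sketch; the branch name goes in `revision`):
```python
from huggingface_hub import snapshot_download

# Download the 6.5 bpw branch into a local folder
snapshot_download(
    repo_id="bartowski/Llama-3-Yggdrasil-8B-exl2",
    revision="6_5",
    local_dir="Llama-3-Yggdrasil-8B-exl2-6_5",
)
```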
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
kimdohoo1102/OPEN-SOLAR-KO-10.7B-sum_0525_1.2 | kimdohoo1102 | 2024-05-24T19:26:03Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T19:21:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Poptart777/q-FrozenLake-v1-4x4-noSlippery | Poptart777 | 2024-05-24T19:22:47Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-24T19:22:43Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # or `import gym` on older setups

# `load_from_hub` is the pickle-loading helper from the Hugging Face Deep RL course
model = load_from_hub(repo_id="Poptart777/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
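As a follow-up sketch, you can roll out the greedy policy (this assumes the pickle stores the Q-table under a `"qtable"` key, as in the Deep RL course format, and a Gymnasium-style step API):
```python
import numpy as np

qtable = model["qtable"]
state, info = env.reset()
done = False
total_reward = 0.0

while not done:
    action = int(np.argmax(qtable[state]))  # act greedily w.r.t. the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated

print(f"Episode return: {total_reward}")
```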
|
hgnoi/WYfGbzJx8UKaLFIF | hgnoi | 2024-05-24T19:21:22Z | 136 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T19:19:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Sorour/llama3_cls_alldata | Sorour | 2024-05-24T19:20:39Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T19:07:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Omriy123/vit_epochs5_batch32_lr5e-05_size224_tiles1_seed1_classic_image_classification_local | Omriy123 | 2024-05-24T19:16:55Z | 224 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-24T19:02:23Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- image-classification
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit_epochs5_batch32_lr5e-05_size224_tiles1_seed1_classic_image_classification_local
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: Dogs_vs_Cats
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9946666666666667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit_epochs5_batch32_lr5e-05_size224_tiles1_seed1_classic_image_classification_local
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the Dogs_vs_Cats dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0229
- Accuracy: 0.9947
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
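For illustration, these hyperparameters map onto a 🤗 `TrainingArguments` setup roughly as below. This is a hedged sketch: the dataset wiring, label count, and output path are assumptions, not the author's script; only the listed hyperparameters come from this card.

```python
# Hypothetical reconstruction of the fine-tuning setup implied by the card's
# hyperparameters; the Adam betas/epsilon above are the transformers defaults.
from transformers import (
    AutoModelForImageClassification,
    Trainer,
    TrainingArguments,
)

model = AutoModelForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k",
    num_labels=2,  # assumed: Dogs vs. Cats is binary
)

args = TrainingArguments(
    output_dir="vit_dogs_vs_cats",  # assumed output path
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)

# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=...)
# trainer.train()
```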
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0735 | 1.0 | 469 | 0.0778 | 0.9763 |
| 0.0205 | 2.0 | 938 | 0.0415 | 0.9896 |
| 0.0011 | 3.0 | 1407 | 0.0229 | 0.9947 |
| 0.0008 | 4.0 | 1876 | 0.0247 | 0.9949 |
| 0.0007 | 5.0 | 2345 | 0.0250 | 0.9949 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.19.1
|
Sorour/cls_alldata_llama3_v1 | Sorour | 2024-05-24T19:06:36Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2024-05-24T06:35:59Z | ---
license: llama3
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B-Instruct
datasets:
- generator
model-index:
- name: cls_alldata_llama3_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cls_alldata_llama3_v1
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4523
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
- mixed_precision_training: Native AMP
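For illustration, the setup implied by these hyperparameters and the card's `trl`/`sft`/`peft` tags looks roughly like the sketch below. The LoRA settings, dataset wiring, and output path are assumptions; only the listed hyperparameters come from this card, and the exact `SFTTrainer` signature varies across TRL versions.

```python
# Hypothetical PEFT + TRL SFT sketch; LoRA rank/alpha/target modules are not
# stated in the card, so LoraConfig defaults stand in as a placeholder.
import torch
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

peft_config = LoraConfig(task_type="CAUSAL_LM")

args = TrainingArguments(
    output_dir="cls_alldata_llama3_v1",
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,  # total train batch size 8
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    num_train_epochs=2,
    fp16=True,  # "Native AMP" mixed precision
    seed=42,
)

# trainer = SFTTrainer(model=model, args=args, peft_config=peft_config,
#                      tokenizer=tokenizer, train_dataset=...)
# trainer.train()
```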
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6921 | 0.0582 | 20 | 0.6831 |
| 0.5975 | 0.1164 | 40 | 0.6416 |
| 0.6107 | 0.1747 | 60 | 0.6082 |
| 0.5609 | 0.2329 | 80 | 0.5883 |
| 0.5857 | 0.2911 | 100 | 0.5761 |
| 0.5386 | 0.3493 | 120 | 0.5660 |
| 0.5176 | 0.4076 | 140 | 0.5529 |
| 0.5317 | 0.4658 | 160 | 0.5379 |
| 0.5244 | 0.5240 | 180 | 0.5292 |
| 0.5218 | 0.5822 | 200 | 0.5234 |
| 0.5003 | 0.6405 | 220 | 0.5207 |
| 0.5024 | 0.6987 | 240 | 0.5096 |
| 0.4913 | 0.7569 | 260 | 0.5062 |
| 0.5174 | 0.8151 | 280 | 0.5003 |
| 0.4675 | 0.8734 | 300 | 0.4968 |
| 0.5137 | 0.9316 | 320 | 0.4903 |
| 0.4883 | 0.9898 | 340 | 0.4869 |
| 0.3616 | 1.0480 | 360 | 0.4935 |
| 0.3713 | 1.1063 | 380 | 0.4890 |
| 0.365 | 1.1645 | 400 | 0.4856 |
| 0.3732 | 1.2227 | 420 | 0.4838 |
| 0.3717 | 1.2809 | 440 | 0.4842 |
| 0.3657 | 1.3392 | 460 | 0.4811 |
| 0.3767 | 1.3974 | 480 | 0.4762 |
| 0.3859 | 1.4556 | 500 | 0.4763 |
| 0.3773 | 1.5138 | 520 | 0.4712 |
| 0.3615 | 1.5721 | 540 | 0.4671 |
| 0.3656 | 1.6303 | 560 | 0.4666 |
| 0.3497 | 1.6885 | 580 | 0.4658 |
| 0.3818 | 1.7467 | 600 | 0.4621 |
| 0.3759 | 1.8049 | 620 | 0.4626 |
| 0.3539 | 1.8632 | 640 | 0.4551 |
| 0.3985 | 1.9214 | 660 | 0.4525 |
| 0.3668 | 1.9796 | 680 | 0.4523 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
HaitameLaframe/Phi-3mini_4bit_FR | HaitameLaframe | 2024-05-24T19:04:05Z | 81 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:CampAIgn/Phi-3-mini_16bit",
"base_model:quantized:CampAIgn/Phi-3-mini_16bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-24T19:01:43Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: CampAIgn/Phi-3-mini_16bit
---
# Uploaded model
- **Developed by:** HaitameLaf
- **License:** apache-2.0
- **Finetuned from model :** CampAIgn/Phi-3-mini_16bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
thesven/Mistral-7B-Instruct-v0.3-GGUF | thesven | 2024-05-24T19:03:04Z | 18 | 1 | null | [
"gguf",
"license:apache-2.0",
"region:us"
] | null | 2024-05-23T11:43:08Z | ---
license: apache-2.0
---
# Model Card for Mistral-7B-Instruct-v0.3
## Quantization Description
This repo contains GGUF quantized versions of Mistral 7B Instruct v0.3.
<div style="text-align: center;">
<a href="https://github.com/thesven/GGUF-n-Go">
<img src="https://github.com/thesven/GGUF-n-Go/blob/main/assets/quantized_with.png?raw=true" alt="image/png" style="max-width: 350px;">
</a>
</div>
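As a usage note, any of the GGUF files in a repo like this can typically be run with `llama.cpp` or its Python bindings. A minimal sketch with `llama-cpp-python` follows; the filename and settings are assumptions, so substitute whichever quant file you actually download.

```python
# Minimal llama-cpp-python sketch for a GGUF quant; the model_path below is
# hypothetical -- point it at the file you downloaded from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="Mistral-7B-Instruct-v0.3.Q4_K_M.gguf",  # assumed filename
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if available
)

out = llm("[INST] What is PEFT finetuning? [/INST]", max_tokens=256)
print(out["choices"][0]["text"])
```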
## Model Description
The Mistral-7B-Instruct-v0.3 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.3.
Mistral-7B-v0.3 has the following changes compared to [Mistral-7B-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2/edit/main/README.md)
- Extended vocabulary to 32768
- Supports v3 Tokenizer
- Supports function calling
## Generate with `transformers`
If you want to use Hugging Face `transformers` to generate text, you can do something like this.
```py
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
pretrained_model_name = "thesven/Mistral-7B-Instruct-v0.3-GPTQ"
device = "cuda:0"
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name)
# Load the model with the specified configuration and move to device
model = AutoModelForCausalLM.from_pretrained(
pretrained_model_name,
device_map="auto",
)
print(model)
# Set EOS token ID
model.eos_token_id = tokenizer.eos_token_id
# Move model to the specified device
model.to(device)
# Define the input text
input_text = "What is PEFT finetuning?"
# Encode the input text
input_ids = tokenizer.encode(input_text, return_tensors="pt").to(device)
# Generate output
output = model.generate(input_ids, max_length=1000)
# Decode the generated output
decoded_output = tokenizer.batch_decode(output, skip_special_tokens=True)
# Print the decoded output
for i, sequence in enumerate(decoded_output):
print(f"Generated Sequence {i+1}: {sequence}")
del model
torch.cuda.empty_cache()
```
## Limitations
The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, William El Sayed, William Marshall |
ShenaoZhang/0.01_version_debug_iter_1 | ShenaoZhang | 2024-05-24T19:01:43Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:HuggingFaceH4/mistral-7b-sft-beta",
"base_model:finetune:HuggingFaceH4/mistral-7b-sft-beta",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T18:30:13Z | ---
license: mit
base_model: HuggingFaceH4/mistral-7b-sft-beta
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: 0.01_version_debug_iter_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.01_version_debug_iter_1
This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the HuggingFaceH4/ultrafeedback_binarized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
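For illustration, a DPO run matching these hyperparameters would look roughly like the sketch below. The dataset split, `beta`, and trainer wiring are assumptions (and the `DPOTrainer` signature varies across TRL versions); only the listed hyperparameters come from this card.

```python
# Hypothetical TRL DPO sketch; the multi-GPU launch (8 devices, total train
# batch size 256) would be handled by accelerate/torchrun, not shown here.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_id = "HuggingFaceH4/mistral-7b-sft-beta"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

ds = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

args = TrainingArguments(
    output_dir="0.01_version_debug_iter_1",
    learning_rate=5e-7,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
    seed=42,
)

# trainer = DPOTrainer(model=model, args=args, train_dataset=ds,
#                      tokenizer=tokenizer)
# trainer.train()
```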
### Training results
### Framework versions
- Transformers 4.40.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
|
amgadhasan/jais-30b-chat-v3-fp16 | amgadhasan | 2024-05-24T18:58:31Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"jais",
"text-generation",
"custom_code",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-03-26T12:33:41Z | A fp16 version of [core42/jais-30b-chat-v3](https://huggingface.co/core42/jais-30b-chat-v3)
All rights reserved to the original authors; I don't own this model. |
hgnoi/nF7aetU6KOxC1Pga | hgnoi | 2024-05-24T18:57:44Z | 136 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T18:56:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
duyntnet/internlm2-chat-7b-imatrix-GGUF | duyntnet | 2024-05-24T18:57:38Z | 27 | 0 | transformers | [
"transformers",
"gguf",
"imatrix",
"internlm2-chat-7b",
"text-generation",
"en",
"license:other",
"region:us",
"conversational"
] | text-generation | 2024-05-24T16:24:17Z | ---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- internlm2-chat-7b
---
Quantizations of https://huggingface.co/internlm/internlm2-chat-7b
# From original readme
### Import from Transformers
To load the InternLM2 7B Chat model using Transformers, use the following code:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("internlm/internlm2-chat-7b", trust_remote_code=True)
# Set `torch_dtype=torch.float16` to load the model in float16; otherwise it is loaded as float32 and may cause an OOM error.
model = AutoModelForCausalLM.from_pretrained("internlm/internlm2-chat-7b", torch_dtype=torch.float16, trust_remote_code=True).cuda()
model = model.eval()
response, history = model.chat(tokenizer, "hello", history=[])
print(response)
# Hello! How can I help you today?
response, history = model.chat(tokenizer, "please provide three suggestions about time management", history=history)
print(response)
```
The responses can be streamed using `stream_chat`:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "internlm/internlm2-chat-7b"
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16, trust_remote_code=True).cuda()
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = model.eval()
length = 0
for response, history in model.stream_chat(tokenizer, "Hello", history=[]):
print(response[length:], flush=True, end="")
length = len(response)
``` |
jrsky/distilbert-base-uncased-finetuned-emotion | jrsky | 2024-05-24T18:57:35Z | 111 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-24T18:37:29Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.922
- name: F1
type: f1
value: 0.9219257948908514
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2192
- Accuracy: 0.922
- F1: 0.9219
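A quick way to try the checkpoint is the 🤗 `pipeline` API; a minimal sketch follows (the label names it returns depend on how the emotion labels were mapped during training):

```python
# Minimal inference sketch; the repo id is taken from this card.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="jrsky/distilbert-base-uncased-finetuned-emotion",
)
print(clf("I'm thrilled the experiment finally worked!"))
```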
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8582 | 1.0 | 250 | 0.3311 | 0.9035 | 0.9034 |
| 0.252 | 2.0 | 500 | 0.2192 | 0.922 | 0.9219 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
comaniac/Meta-Llama-3-8B-Instruct-FP8-v1 | comaniac | 2024-05-24T18:57:21Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"fp8",
"region:us"
] | text-generation | 2024-05-22T20:03:34Z | ## Llama-3-8B-Instruct-FP8-v1
* Weights and activations are per-tensor quantized to float8_e4m3.
* Quantized with AutoFP8.
* Calibration dataset: Ultrachat (mgoin/ultrachat_2k)
* Samples: 512
* Sequence length: 8192
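A hedged sketch of this per-tensor FP8 flow with AutoFP8 is below. The import path, class names, and dataset split follow the AutoFP8 project's examples as best recalled; treat all of them as assumptions and verify against the AutoFP8 repository.

```python
# Hypothetical AutoFP8 quantization sketch -- names are assumptions, not
# verified API; 512 samples / 8192 sequence length match the card above.
from datasets import load_dataset
from transformers import AutoTokenizer
from auto_fp8 import AutoFP8ForCausalLM, BaseQuantizeConfig  # assumed import

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Calibration set: Ultrachat, 512 samples (split name assumed)
ds = load_dataset("mgoin/ultrachat_2k", split="train_sft").select(range(512))
texts = [tokenizer.apply_chat_template(x["messages"], tokenize=False) for x in ds]
examples = tokenizer(texts, padding=True, truncation=True, max_length=8192,
                     return_tensors="pt")

config = BaseQuantizeConfig(quant_method="fp8", activation_scheme="static")
model = AutoFP8ForCausalLM.from_pretrained(model_id, config)
model.quantize(examples)
model.save_quantized("Meta-Llama-3-8B-Instruct-FP8-v1")
```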
## Evaluation
TBA |
bryannaufal/trash-clf | bryannaufal | 2024-05-24T18:57:08Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-24T15:26:43Z | ---
license: apache-2.0
---
|
xuliu15/openai-whisper-small-English-32r-1h | xuliu15 | 2024-05-24T18:56:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-21T21:14:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Vishaltiwari2019/opt-6.7b-lora | Vishaltiwari2019 | 2024-05-24T18:52:29Z | 0 | 4 | transformers | [
"transformers",
"safetensors",
"text-generation",
"en",
"arxiv:1910.09700",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T18:48:26Z | ---
library_name: transformers
license: apache-2.0
language:
- en
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Niggendar/ponymatureSDXL_ponyeclipse10 | Niggendar | 2024-05-24T18:52:08Z | 87 | 2 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-05-24T18:46:59Z | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bmehrba/Llama-2-7b-chat-hf-fine-tuned_Llama2_7b_contamination_8digits_Seed2024 | bmehrba | 2024-05-24T18:50:34Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-05-24T18:50:31Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
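For reference, the fields above correspond to a `BitsAndBytesConfig` roughly like the following sketch (the model load call is illustrative, not the training script itself):

```python
# Sketch of the 4-bit NF4 double-quant config listed above, expressed through
# the transformers BitsAndBytesConfig API.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
```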
### Framework versions
- PEFT 0.7.0.dev0
|
Disty0/sotediffusion-wuerstchen3-alpha3-decoder | Disty0 | 2024-05-24T18:44:56Z | 32 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"license:other",
"diffusers:StableCascadeDecoderPipeline",
"region:us"
] | text-to-image | 2024-05-24T17:48:41Z | ---
pipeline_tag: text-to-image
license: other
license_name: faipl-1.0-sd
license_link: LICENSE
prior:
- Disty0/sotediffusion-wuerstchen3-alpha3
---
# SoteDiffusion Wuerstchen3
Anime finetune of Würstchen V3.
Currently in active training.
No commercial use.
# Usage
Please refer to the main model: https://huggingface.co/Disty0/sotediffusion-wuerstchen3-alpha3
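For orientation, a Stable Cascade / Würstchen V3 decoder is normally paired with its prior in 🧨 diffusers roughly as sketched below. The dtypes, step counts, and guidance values here are assumptions; the main model card linked above is the authoritative reference.

```python
# Hypothetical two-stage inference sketch: the prior produces image
# embeddings, and this decoder turns them into the final image.
import torch
from diffusers import StableCascadePriorPipeline, StableCascadeDecoderPipeline

prior = StableCascadePriorPipeline.from_pretrained(
    "Disty0/sotediffusion-wuerstchen3-alpha3", torch_dtype=torch.bfloat16
).to("cuda")
decoder = StableCascadeDecoderPipeline.from_pretrained(
    "Disty0/sotediffusion-wuerstchen3-alpha3-decoder", torch_dtype=torch.float16
).to("cuda")

prompt = "1girl, solo, very aesthetic, best quality"  # assumed prompt style
prior_out = prior(prompt=prompt, num_inference_steps=30, guidance_scale=7.0)
image = decoder(
    image_embeddings=prior_out.image_embeddings.to(torch.float16),
    prompt=prompt,
    num_inference_steps=10,
    guidance_scale=1.0,
).images[0]
image.save("out.png")
```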
## Dataset
Used the images tagged **extremely aesthetic, best quality** or **very aesthetic, best quality** from the **newest** dataset.
Trained on **~120K** images.
## Training:
**GPU used for training**: 1x AMD RX 7900 XTX 24GB
**Software used**: https://github.com/2kpr/StableCascade
### Config:
```
experiment_id: sotediffusion-wr3_3b-stage_b-alpha3
model_version: 3B
dtype: bfloat16
use_fsdp: False
batch_size: 16
grad_accum_steps: 16
updates: 7680
backup_every: 512
save_every: 256
warmup_updates: 128
lr: 1.0e-5
optimizer_type: Adafactor
adaptive_loss_weight: False
stochastic_rounding: True
image_size: 768
multi_aspect_ratio: [1/1, 1/2, 1/3, 2/3, 3/4, 1/5, 2/5, 3/5, 4/5, 1/6, 5/6, 9/16]
shift: 4
checkpoint_path: /mnt/DataSSD/AI/SoteDiffusion/Wuerstchen3/
output_path: /mnt/DataSSD/AI/SoteDiffusion/Wuerstchen3/
webdataset_path: file:/mnt/DataSSD/AI/anime_image_dataset/best/newest_best.tar
effnet_checkpoint_path: /mnt/DataSSD/AI/models/wuerstchen3/effnet_encoder.safetensors
stage_a_checkpoint_path: /mnt/DataSSD/AI/models/wuerstchen3/stage_a.safetensors
generator_checkpoint_path: /mnt/DataSSD/AI/SoteDiffusion/Wuerstchen3/generator-045056.safetensors
```
## Limitations and Bias
### Bias
- This model is intended for anime illustrations.
Realistic capabilities are not tested at all.
### Limitations
- Eyes in far shots can come out badly.
- Still in active training.
## License
SoteDiffusion models fall under the [Fair AI Public License 1.0-SD](https://freedevproject.org/faipl-1.0-sd/) license, which is compatible with the Stable Diffusion models' license. Key points:
1. **Modification Sharing:** If you modify SoteDiffusion models, you must share both your changes and the original license.
2. **Source Code Accessibility:** If your modified version is network-accessible, provide a way (like a download link) for others to get the source code. This applies to derived models too.
3. **Distribution Terms:** Any distribution must be under this license or another with similar rules.
4. **Compliance:** Non-compliance must be fixed within 30 days to avoid license termination, emphasizing transparency and adherence to open-source values.
**Notes**: Anything not covered by the Fair AI license is inherited from the Stability AI Non-Commercial license, included here as LICENSE_INHERIT. In short: still no commercial use of any kind.
|
RichardErkhov/saishf_-_Fett-uccine-11B-Experiment-gguf | RichardErkhov | 2024-05-24T18:44:38Z | 10 | 1 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-24T15:34:55Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Fett-uccine-11B-Experiment - GGUF
- Model creator: https://huggingface.co/saishf/
- Original model: https://huggingface.co/saishf/Fett-uccine-11B-Experiment/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Fett-uccine-11B-Experiment.Q2_K.gguf](https://huggingface.co/RichardErkhov/saishf_-_Fett-uccine-11B-Experiment-gguf/blob/main/Fett-uccine-11B-Experiment.Q2_K.gguf) | Q2_K | 3.73GB |
| [Fett-uccine-11B-Experiment.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/saishf_-_Fett-uccine-11B-Experiment-gguf/blob/main/Fett-uccine-11B-Experiment.IQ3_XS.gguf) | IQ3_XS | 4.14GB |
| [Fett-uccine-11B-Experiment.IQ3_S.gguf](https://huggingface.co/RichardErkhov/saishf_-_Fett-uccine-11B-Experiment-gguf/blob/main/Fett-uccine-11B-Experiment.IQ3_S.gguf) | IQ3_S | 4.37GB |
| [Fett-uccine-11B-Experiment.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/saishf_-_Fett-uccine-11B-Experiment-gguf/blob/main/Fett-uccine-11B-Experiment.Q3_K_S.gguf) | Q3_K_S | 4.34GB |
| [Fett-uccine-11B-Experiment.IQ3_M.gguf](https://huggingface.co/RichardErkhov/saishf_-_Fett-uccine-11B-Experiment-gguf/blob/main/Fett-uccine-11B-Experiment.IQ3_M.gguf) | IQ3_M | 4.51GB |
| [Fett-uccine-11B-Experiment.Q3_K.gguf](https://huggingface.co/RichardErkhov/saishf_-_Fett-uccine-11B-Experiment-gguf/blob/main/Fett-uccine-11B-Experiment.Q3_K.gguf) | Q3_K | 4.84GB |
| [Fett-uccine-11B-Experiment.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/saishf_-_Fett-uccine-11B-Experiment-gguf/blob/main/Fett-uccine-11B-Experiment.Q3_K_M.gguf) | Q3_K_M | 4.84GB |
| [Fett-uccine-11B-Experiment.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/saishf_-_Fett-uccine-11B-Experiment-gguf/blob/main/Fett-uccine-11B-Experiment.Q3_K_L.gguf) | Q3_K_L | 5.26GB |
| [Fett-uccine-11B-Experiment.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/saishf_-_Fett-uccine-11B-Experiment-gguf/blob/main/Fett-uccine-11B-Experiment.IQ4_XS.gguf) | IQ4_XS | 5.43GB |
| [Fett-uccine-11B-Experiment.Q4_0.gguf](https://huggingface.co/RichardErkhov/saishf_-_Fett-uccine-11B-Experiment-gguf/blob/main/Fett-uccine-11B-Experiment.Q4_0.gguf) | Q4_0 | 5.66GB |
| [Fett-uccine-11B-Experiment.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/saishf_-_Fett-uccine-11B-Experiment-gguf/blob/main/Fett-uccine-11B-Experiment.IQ4_NL.gguf) | IQ4_NL | 5.72GB |
| [Fett-uccine-11B-Experiment.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/saishf_-_Fett-uccine-11B-Experiment-gguf/blob/main/Fett-uccine-11B-Experiment.Q4_K_S.gguf) | Q4_K_S | 5.7GB |
| [Fett-uccine-11B-Experiment.Q4_K.gguf](https://huggingface.co/RichardErkhov/saishf_-_Fett-uccine-11B-Experiment-gguf/blob/main/Fett-uccine-11B-Experiment.Q4_K.gguf) | Q4_K | 6.02GB |
| [Fett-uccine-11B-Experiment.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/saishf_-_Fett-uccine-11B-Experiment-gguf/blob/main/Fett-uccine-11B-Experiment.Q4_K_M.gguf) | Q4_K_M | 6.02GB |
| [Fett-uccine-11B-Experiment.Q4_1.gguf](https://huggingface.co/RichardErkhov/saishf_-_Fett-uccine-11B-Experiment-gguf/blob/main/Fett-uccine-11B-Experiment.Q4_1.gguf) | Q4_1 | 6.27GB |
| [Fett-uccine-11B-Experiment.Q5_0.gguf](https://huggingface.co/RichardErkhov/saishf_-_Fett-uccine-11B-Experiment-gguf/blob/main/Fett-uccine-11B-Experiment.Q5_0.gguf) | Q5_0 | 6.89GB |
| [Fett-uccine-11B-Experiment.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/saishf_-_Fett-uccine-11B-Experiment-gguf/blob/main/Fett-uccine-11B-Experiment.Q5_K_S.gguf) | Q5_K_S | 6.89GB |
| [Fett-uccine-11B-Experiment.Q5_K.gguf](https://huggingface.co/RichardErkhov/saishf_-_Fett-uccine-11B-Experiment-gguf/blob/main/Fett-uccine-11B-Experiment.Q5_K.gguf) | Q5_K | 7.08GB |
| [Fett-uccine-11B-Experiment.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/saishf_-_Fett-uccine-11B-Experiment-gguf/blob/main/Fett-uccine-11B-Experiment.Q5_K_M.gguf) | Q5_K_M | 7.08GB |
| [Fett-uccine-11B-Experiment.Q5_1.gguf](https://huggingface.co/RichardErkhov/saishf_-_Fett-uccine-11B-Experiment-gguf/blob/main/Fett-uccine-11B-Experiment.Q5_1.gguf) | Q5_1 | 7.51GB |
| [Fett-uccine-11B-Experiment.Q6_K.gguf](https://huggingface.co/RichardErkhov/saishf_-_Fett-uccine-11B-Experiment-gguf/blob/main/Fett-uccine-11B-Experiment.Q6_K.gguf) | Q6_K | 8.2GB |
| [Fett-uccine-11B-Experiment.Q8_0.gguf](https://huggingface.co/RichardErkhov/saishf_-_Fett-uccine-11B-Experiment-gguf/blob/main/Fett-uccine-11B-Experiment.Q8_0.gguf) | Q8_0 | 10.62GB |
Original model description:
---
base_model:
- Epiculous/Fett-uccine-7B
library_name: transformers
tags:
- mergekit
- merge
license: agpl-3.0
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
**Scores lower than the original model! Not recommended unless you're experimenting.**
This model is an experiment in applying passthrough to a 7B model so it can be further merged with 10.7B/11B models, just for fun. I doubt this model will offer any benefit over the original.
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [Epiculous/Fett-uccine-7B](https://huggingface.co/Epiculous/Fett-uccine-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Epiculous/Fett-uccine-7B
layer_range: [0, 24]
- sources:
- model: Epiculous/Fett-uccine-7B
layer_range: [8, 32]
merge_method: passthrough
dtype: float16
```
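A config like this is normally run through the mergekit CLI, typically something like `mergekit-yaml config.yaml ./output-dir` (command form recalled from the mergekit README linked above; check that repo for current flags such as `--cuda`).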
|