pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 205 values) | text (stringlengths 0–18.3M) | metadata (stringlengths 2–1.07B) | id (stringlengths 5–122) | last_modified (null) | tags (sequencelengths 1–1.84k) | sha (null) | created_at (stringlengths 25–25)
---|---|---|---|---|---|---|---|---|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-cased-finetuned-feelings-text-classifier-eng
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3930
- Accuracy: 0.9186
## Model description
More information needed
## Intended uses & limitations
More information needed
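Pending fuller documentation, a minimal inference sketch, assuming the repo id `LukeGPT88/feelings-text-classifier-eng` from the metadata and the standard `transformers` pipeline API (the label set is not documented here):
```python
from transformers import pipeline

# Sketch only: the actual label names depend on the checkpoint's config.
classifier = pipeline(
    "text-classification",
    model="LukeGPT88/feelings-text-classifier-eng",
)

print(classifier("I finally finished the project and I feel great!"))
```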
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
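A hedged sketch of how the settings above map onto `TrainingArguments` (assuming the standard 🤗 Trainer API; the `output_dir` is hypothetical, and the Adam betas/epsilon listed above are the library defaults):
```python
from transformers import TrainingArguments

# Sketch only: reproduces the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="distilbert-feelings-classifier",  # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```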
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3309 | 1.0 | 564 | 0.2348 | 0.9193 |
| 0.193 | 2.0 | 1128 | 0.2359 | 0.9226 |
| 0.1566 | 3.0 | 1692 | 0.2517 | 0.9241 |
| 0.1275 | 4.0 | 2256 | 0.2629 | 0.9256 |
| 0.1021 | 5.0 | 2820 | 0.3023 | 0.9219 |
| 0.0834 | 6.0 | 3384 | 0.3145 | 0.9193 |
| 0.0679 | 7.0 | 3948 | 0.3338 | 0.9219 |
| 0.0543 | 8.0 | 4512 | 0.3695 | 0.9194 |
| 0.0475 | 9.0 | 5076 | 0.3795 | 0.9198 |
| 0.0389 | 10.0 | 5640 | 0.3930 | 0.9186 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "distilbert/distilbert-base-uncased", "model-index": [{"name": "distilbert-base-cased-finetuned-feelings-text-classifier-eng", "results": []}]} | LukeGPT88/feelings-text-classifier-eng | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T12:53:46+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ellis-v3-emotion-leadership-multi-label
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1095
- Accuracy: 0.9728
- F1: 0.9318
- Precision: 0.9346
- Recall: 0.9289
## Model description
More information needed
## Intended uses & limitations
More information needed
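Absent documented usage, a hedged multi-label inference sketch (repo id `gsl22/ellis-v3-emotion-leadership-multi-label` from the metadata; `top_k=None` scores every label, and the 0.5 cutoff is a hypothetical threshold):
```python
from transformers import pipeline

# Sketch only: return a score per label so multi-label thresholds can be applied.
classifier = pipeline(
    "text-classification",
    model="gsl22/ellis-v3-emotion-leadership-multi-label",
    top_k=None,  # score all labels instead of only the argmax
)

scores = classifier("Our team lead stayed calm and gave clear direction under pressure.")[0]
print([s for s in scores if s["score"] > 0.5])  # hypothetical 0.5 threshold
```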
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.1396 | 1.0 | 5910 | 0.1235 | 0.9669 | 0.9169 | 0.9211 | 0.9127 |
| 0.1025 | 2.0 | 11820 | 0.1095 | 0.9728 | 0.9318 | 0.9346 | 0.9289 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "base_model": "microsoft/deberta-v3-small", "model-index": [{"name": "ellis-v3-emotion-leadership-multi-label", "results": []}]} | gsl22/ellis-v3-emotion-leadership-multi-label | null | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-small",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T12:54:34+00:00 |
null | null | {} | michele081tufano/videomae-base-finetuned-ucf101-subset | null | [
"region:us"
] | null | 2024-04-29T12:54:43+00:00 |
|
null | null | {"license": "openrail"} | kaengpil/CBRAP | null | [
"license:openrail",
"region:us"
] | null | 2024-04-29T12:55:16+00:00 |
|
null | null | {} | SageLiao/llava-1.5-7b-hf-ft-mix-vsft | null | [
"region:us"
] | null | 2024-04-29T12:56:22+00:00 |
|
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | GaussianMixture/llama3_tokenizer | null | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T12:56:42+00:00 |
null | null | {"license": "apache-2.0"} | Xenova/movenet-singlepose-lightning | null | [
"onnx",
"license:apache-2.0",
"region:us"
] | null | 2024-04-29T12:56:44+00:00 |
|
null | null | {} | TheNiceMamouth/whisper-small-hi | null | [
"region:us"
] | null | 2024-04-29T12:57:15+00:00 |
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | EpicJhon/llama_231 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T12:57:49+00:00 |
null | null |
## Introduction
This repository provides the baseline model files for CNVSRC2024 (Chinese Continuous Visual Speech Recognition Challenge 2024).
## Usage
Please download these model files and use them in the [baseline code](https://github.com/sectum1919/CNVSRC2024Baseline).
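For example, a minimal download sketch using `huggingface_hub` (repo id `chenchen2121/CNVSRC2024Baseline` from the metadata; file names are listed in the performance table below):
```python
from huggingface_hub import hf_hub_download

# Fetch one checkpoint; pass any file name from the table below.
ckpt_path = hf_hub_download(
    repo_id="chenchen2121/CNVSRC2024Baseline",
    filename="model_avg_last5_cncvs_cnvsrc-single.pth",
)
print(ckpt_path)  # local cache path to point the baseline code at
```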
## Performance
The following table shows each model's performance on its task.
| Training Data | Task | CER | File Name |
|:-------------------------:|:----------------------:|:------:|:-----------------------------------------|
| CN-CVS (<4s) | Pre-training | / | model_avg_14_23_cncvs_4s.pth |
| CN-CVS (full) | Pre-training | / | model_avg_last10_cncvs_4s_30s.pth |
| CN-CVS + CNVSRC-Single.Dev| Single-speaker VSR (T1)| 39.66% | model_avg_last5_cncvs_cnvsrc-single.pth |
| CN-CVS + CNVSRC-Multi.Dev | Multi-speaker VSR (T2)| 52.20% | model_avg_last5_cncvs_cnvsrc-multi.pth | | {"metrics": ["cer"]} | chenchen2121/CNVSRC2024Baseline | null | [
"region:us"
] | null | 2024-04-29T12:57:59+00:00 |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-eurosat
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1775
- Accuracy: 0.9320
## Model description
More information needed
## Intended uses & limitations
More information needed
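Until more information is added, a minimal classification sketch (repo id `mansee/vit-base-patch16-224-finetuned-eurosat` from the metadata; the input image path is a placeholder):
```python
from transformers import pipeline

# Sketch only: standard image-classification pipeline over the fine-tuned ViT checkpoint.
classifier = pipeline(
    "image-classification",
    model="mansee/vit-base-patch16-224-finetuned-eurosat",
)

print(classifier("path/to/satellite_tile.png"))  # hypothetical input image
```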
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.353 | 1.0 | 694 | 0.2625 | 0.8918 |
| 0.3266 | 2.0 | 1388 | 0.1964 | 0.9224 |
| 0.2636 | 3.0 | 2082 | 0.1775 | 0.9320 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.0.1+cu117
- Datasets 2.18.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "google/vit-base-patch16-224", "model-index": [{"name": "vit-base-patch16-224-finetuned-eurosat", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9320024321037698, "name": "Accuracy"}]}]}]} | mansee/vit-base-patch16-224-finetuned-eurosat | null | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T12:58:13+00:00 |
text-classification | transformers | ## TextAttack Model Card
This `bert` model was fine-tuned using TextAttack for 3 epochs with a batch size of 8,
a maximum sequence length of 512, and an initial learning rate of 3e-05.
Since this was a classification task, the model was trained with a cross-entropy loss.
The best eval-set accuracy the model achieved on this task was 0.9657, reached after 3 epochs.
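A hedged inference sketch (repo id `WangA/bert-base-finetuned-ctrip` from the metadata, which marks the model as Chinese; the repo name suggests Ctrip review classification, but the label set is not documented here):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("WangA/bert-base-finetuned-ctrip")
model = AutoModelForSequenceClassification.from_pretrained("WangA/bert-base-finetuned-ctrip")

# 512 matches the maximum sequence length used during fine-tuning.
inputs = tokenizer("酒店位置很好,房间干净,服务也很周到。",
                   truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = logits.softmax(dim=-1)
print(model.config.id2label[int(probs.argmax())], float(probs.max()))
```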
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack). | {"language": ["zh"], "license": "apache-2.0", "metrics": ["accuracy"], "pipeline_tag": "text-classification"} | WangA/bert-base-finetuned-ctrip | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T12:58:37+00:00 |
null | null | {} | gcairone/provamodel | null | [
"region:us"
] | null | 2024-04-29T12:58:58+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vanilla_dpo_iter_1
This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on the `updated` and `original` datasets.
It achieves the following results on the evaluation set:
- Loss: 0.6274
- Rewards/chosen: -0.0991
- Rewards/rejected: -0.2700
- Rewards/accuracies: 0.6800
- Rewards/margins: 0.1709
- Logps/rejected: -284.5185
- Logps/chosen: -293.9490
- Logits/rejected: -2.5507
- Logits/chosen: -2.6341
## Model description
More information needed
## Intended uses & limitations
More information needed
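Since this is a PEFT adapter on `alignment-handbook/zephyr-7b-sft-full`, a hedged loading sketch (assuming the adapter resolves through PEFT's `AutoPeftModelForCausalLM`; repo id `YYYYYYibo/vanilla_dpo_iter_1` from the metadata):
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Sketch only: AutoPeftModelForCausalLM reads the base model named in the adapter
# config (alignment-handbook/zephyr-7b-sft-full) and applies the DPO-trained adapter.
model = AutoPeftModelForCausalLM.from_pretrained("YYYYYYibo/vanilla_dpo_iter_1", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("alignment-handbook/zephyr-7b-sft-full")

inputs = tokenizer("What does DPO optimize?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```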
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6186 | 0.63 | 100 | 0.6274 | -0.0991 | -0.2700 | 0.6800 | 0.1709 | -284.5185 | -293.9490 | -2.5507 | -2.6341 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo"], "datasets": ["updated", "original"], "base_model": "alignment-handbook/zephyr-7b-sft-full", "model-index": [{"name": "vanilla_dpo_iter_1", "results": []}]} | YYYYYYibo/vanilla_dpo_iter_1 | null | [
"peft",
"safetensors",
"mistral",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"dataset:updated",
"dataset:original",
"base_model:alignment-handbook/zephyr-7b-sft-full",
"license:apache-2.0",
"region:us"
] | null | 2024-04-29T13:00:28+00:00 |
null | null | {} | chakkakrishna/bertqa | null | [
"tensorboard",
"region:us"
] | null | 2024-04-29T13:01:17+00:00 |
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | golf2248/k1h4uct | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T13:01:44+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | golf2248/0rhofi7 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T13:01:50+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | golf2248/7p5e6xs | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T13:01:54+00:00 |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with AWQ.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model under your own use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** Where the compression method requires it, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption less than 90% of the original base model's.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping measurement once all of them have executed. "Async" metrics are obtained without syncing all GPU processes, stopping when the model output can be used by the CPU. We provide both since either can be relevant depending on the use case. We recommend testing the efficiency gains directly in your use cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements of the original repo aaditya/Llama3-OpenBioLLM-8B are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install autoawq
```
2. Load & run the model.
```python
from transformers import AutoTokenizer
from awq import AutoAWQForCausalLM
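# AutoAWQForCausalLM loads the 4-bit AWQ-quantized weights; device_map='auto'
# places layers across the available GPUs automatically.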
model = AutoAWQForCausalLM.from_quantized("PrunaAI/aaditya-Llama3-OpenBioLLM-8B-AWQ-4bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("aaditya/Llama3-OpenBioLLM-8B")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model aaditya/Llama3-OpenBioLLM-8B, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "aaditya/Llama3-OpenBioLLM-8B"} | PrunaAI/aaditya-Llama3-OpenBioLLM-8B-AWQ-4bit-smashed | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"pruna-ai",
"base_model:aaditya/Llama3-OpenBioLLM-8B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-29T13:01:56+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | golf2248/0clmk5d | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T13:01:58+00:00 |
image-classification | transformers |
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
No validation metrics available
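A hedged inference sketch (repo id `as-cle-bert/bcus-class-segformer` from the metadata; the linked dataset name suggests breast-cancer ultrasound classes, and the input file is a placeholder):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("as-cle-bert/bcus-class-segformer")
model = AutoModelForImageClassification.from_pretrained("as-cle-bert/bcus-class-segformer")

image = Image.open("ultrasound_scan.png").convert("RGB")  # hypothetical input
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])
```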
| {"tags": ["autotrain", "image-classification"], "datasets": ["as-cle-bert/breastcanc-ultrasound-class"], "widget": [{"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg", "example_title": "Tiger"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg", "example_title": "Teapot"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg", "example_title": "Palace"}]} | as-cle-bert/bcus-class-segformer | null | [
"transformers",
"tensorboard",
"safetensors",
"segformer",
"image-classification",
"autotrain",
"dataset:as-cle-bert/breastcanc-ultrasound-class",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T13:02:16+00:00 |
null | null | {"license": "apache-2.0"} | syazwansamsol/syazwansamsolAI | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-04-29T13:02:44+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# final_V1-roberta-text-classification-model
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2641
- Accuracy: 0.9502
- F1: 0.8186
- Precision: 0.8164
- Recall: 0.8225
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.6615 | 0.11 | 50 | 1.5503 | 0.3199 | 0.1016 | 0.2599 | 0.1518 |
| 0.6244 | 0.22 | 100 | 0.7198 | 0.7692 | 0.4656 | 0.4593 | 0.4818 |
| 0.3344 | 0.33 | 150 | 0.4852 | 0.8893 | 0.6484 | 0.6264 | 0.6733 |
| 0.2596 | 0.44 | 200 | 0.5277 | 0.8805 | 0.6398 | 0.6124 | 0.6748 |
| 0.2173 | 0.55 | 250 | 0.4417 | 0.8849 | 0.6577 | 0.6421 | 0.6750 |
| 0.2393 | 0.66 | 300 | 0.5221 | 0.8707 | 0.6511 | 0.6361 | 0.6684 |
| 0.2229 | 0.76 | 350 | 0.4997 | 0.8928 | 0.6602 | 0.6410 | 0.6814 |
| 0.1482 | 0.87 | 400 | 0.5111 | 0.8983 | 0.6409 | 0.6131 | 0.6810 |
| 0.1831 | 0.98 | 450 | 0.4251 | 0.8827 | 0.6827 | 0.7149 | 0.6957 |
| 0.1882 | 1.09 | 500 | 0.4130 | 0.9043 | 0.6805 | 0.7998 | 0.6878 |
| 0.1182 | 1.2 | 550 | 0.4513 | 0.9076 | 0.6973 | 0.7703 | 0.7004 |
| 0.101 | 1.31 | 600 | 0.3402 | 0.9221 | 0.7040 | 0.8097 | 0.7036 |
| 0.0749 | 1.42 | 650 | 0.1566 | 0.9658 | 0.8229 | 0.8350 | 0.8122 |
| 0.1294 | 1.53 | 700 | 0.1586 | 0.9675 | 0.8336 | 0.8327 | 0.8346 |
| 0.046 | 1.64 | 750 | 0.2010 | 0.9604 | 0.8264 | 0.8211 | 0.8334 |
| 0.0833 | 1.75 | 800 | 0.1707 | 0.9647 | 0.8285 | 0.8244 | 0.8330 |
| 0.0759 | 1.86 | 850 | 0.1625 | 0.9664 | 0.8278 | 0.8285 | 0.8271 |
| 0.0459 | 1.97 | 900 | 0.1831 | 0.9620 | 0.8258 | 0.8200 | 0.8328 |
| 0.0726 | 2.07 | 950 | 0.1753 | 0.9625 | 0.8279 | 0.8287 | 0.8276 |
| 0.0369 | 2.18 | 1000 | 0.1871 | 0.9650 | 0.8300 | 0.8252 | 0.8362 |
| 0.0456 | 2.29 | 1050 | 0.1524 | 0.9683 | 0.8320 | 0.8278 | 0.8367 |
| 0.0371 | 2.4 | 1100 | 0.1857 | 0.9631 | 0.8280 | 0.8219 | 0.8353 |
| 0.0106 | 2.51 | 1150 | 0.1850 | 0.9661 | 0.8318 | 0.8274 | 0.8370 |
| 0.0173 | 2.62 | 1200 | 0.2055 | 0.9647 | 0.8310 | 0.8259 | 0.8374 |
| 0.036 | 2.73 | 1250 | 0.1699 | 0.9694 | 0.8311 | 0.8267 | 0.8358 |
| 0.0176 | 2.84 | 1300 | 0.1780 | 0.9691 | 0.8325 | 0.8274 | 0.8382 |
| 0.0444 | 2.95 | 1350 | 0.1918 | 0.9672 | 0.8319 | 0.8275 | 0.8371 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "base_model": "roberta-base", "model-index": [{"name": "final_V1-roberta-text-classification-model", "results": []}]} | AmirlyPhd/final_V1-roberta-text-classification-model | null | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T13:03:16+00:00 |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with HQQ.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model under your own use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** Where the compression method requires it, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption less than 90% of the original base model's.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping measurement once all of them have executed. "Async" metrics are obtained without syncing all GPU processes, stopping when the model output can be used by the CPU. We provide both since either can be relevant depending on the use case. We recommend testing the efficiency gains directly in your use cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo google/gemma-2b are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
    # Preferred path: load the quantized checkpoint through hqq's HF engine wrapper.
    model = HQQModelForCausalLM.from_quantized("PrunaAI/google-gemma-2b-HQQ-1bit-smashed", device_map='auto')
except Exception:
    # Fallback for hqq versions where the engine wrapper cannot load this checkpoint.
    model = AutoHQQHFModel.from_quantized("PrunaAI/google-gemma-2b-HQQ-1bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-2b, which provided the base model, before using this one. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "google/gemma-2b"} | PrunaAI/google-gemma-2b-HQQ-1bit-smashed | null | [
"transformers",
"gemma",
"text-generation",
"pruna-ai",
"base_model:google/gemma-2b",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T13:03:35+00:00 |
null | diffusers | {} | Priya-H/Tune-A-Video_PanoramaOutput | null | [
"diffusers",
"region:us"
] | null | 2024-04-29T13:03:38+00:00 |
|
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to know whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo google/gemma-2b are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
    # Preferred path: load the quantized checkpoint through hqq's HF engine wrapper.
    model = HQQModelForCausalLM.from_quantized("PrunaAI/google-gemma-2b-HQQ-2bit-smashed", device_map='auto')
except Exception:
    # Fallback for hqq versions where the engine wrapper cannot load this checkpoint.
    model = AutoHQQHFModel.from_quantized("PrunaAI/google-gemma-2b-HQQ-2bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-2b, which provided the base model, before using this one. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "google/gemma-2b"} | PrunaAI/google-gemma-2b-HQQ-2bit-smashed | null | [
"transformers",
"gemma",
"text-generation",
"pruna-ai",
"base_model:google/gemma-2b",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T13:04:32+00:00 |
null | null | {} | hossein0677/my_awesome_wnut_model | null | [
"region:us"
] | null | 2024-04-29T13:04:51+00:00 |
|
token-classification | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# hossein0677/token_classification
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1014
- Validation Loss: 0.2498
- Train Precision: 0.6477
- Train Recall: 0.4617
- Train F1: 0.5391
- Train Accuracy: 0.9488
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 636, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.1014 | 0.2498 | 0.6477 | 0.4617 | 0.5391 | 0.9488 | 0 |
### Framework versions
- Transformers 4.40.0
- TensorFlow 2.15.0
- Datasets 2.19.0
- Tokenizers 0.19.1
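## How to use

The card omits a usage example. A minimal TensorFlow inference sketch — assuming the checkpoint exposes the usual `id2label` mapping in its config — might look like:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForTokenClassification

repo = "hossein0677/token_classification"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForTokenClassification.from_pretrained(repo)

# The example sentence is illustrative, not from the card.
inputs = tokenizer("HuggingFace is based in New York City", return_tensors="tf")
logits = model(**inputs).logits
pred_ids = tf.argmax(logits, axis=-1)[0]
print([model.config.id2label[int(i)] for i in pred_ids])
```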
| {"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "distilbert/distilbert-base-uncased", "model-index": [{"name": "hossein0677/token_classification", "results": []}]} | hossein0677/token_classification | null | [
"transformers",
"tf",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T13:05:46+00:00 |
text-generation | transformers |
| Tasks |Metric |Value | |Stderr|
|------------|--------|-----:|---|-----:|
|hellaswag_it|acc |0.4502|± |0.0052|
| |acc_norm|0.5994|± |0.0051|
|arc_it |acc |0.0813|± |0.0080|
| |acc_norm|0.4243|± |0.0145| | {"language": ["it"], "license": "apache-2.0", "datasets": ["seeweb/Seeweb-it-292-forLLM"]} | nonsonpratico/phi3-3.8-128k-italian | null | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"it",
"dataset:seeweb/Seeweb-it-292-forLLM",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T13:07:12+00:00 |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# meditron-7b-dpo-full-sft-wo-medication_qa
This model is a fine-tuned version of [Minbyul/meditron-7b-wo-medication_qa-sft](https://huggingface.co/Minbyul/meditron-7b-wo-medication_qa-sft) on the HuggingFaceH4/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6347
- Rewards/chosen: -0.0755
- Rewards/rejected: -0.3042
- Rewards/accuracies: 0.7812
- Rewards/margins: 0.2287
- Logps/rejected: -613.1549
- Logps/chosen: -395.2336
- Logits/rejected: -1.0640
- Logits/chosen: -1.1745
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
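## How to use

The card has no inference example. Here is a hedged sketch — the repo id is this card's, but the prompt format is an assumption, so check the template actually used during SFT/DPO:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Minbyul/meditron-7b-dpo-full-sft-wo-medication_qa"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

# Illustrative prompt; the actual chat/instruction template may differ.
prompt = "Question: What are common side effects of ibuprofen?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```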
| {"license": "llama2", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "alignment-handbook", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "base_model": "Minbyul/meditron-7b-wo-medication_qa-sft", "model-index": [{"name": "meditron-7b-dpo-full-sft-wo-medication_qa", "results": []}]} | Minbyul/meditron-7b-dpo-full-sft-wo-medication_qa | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:Minbyul/meditron-7b-wo-medication_qa-sft",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T13:07:13+00:00 |
fill-mask | transformers | {} | jd445/2018 | null | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T13:07:29+00:00 |
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | shallow6414/7zg0x7v | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T13:07:42+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["unsloth", "trl", "sft"]} | nayan8625/llama3-16bit-finetuned | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"unsloth",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T13:09:35+00:00 |
null | null | {"license": "openrail"} | otmanabs/cam1 | null | [
"safetensors",
"license:openrail",
"region:us"
] | null | 2024-04-29T13:09:36+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: codellama/CodeLlama-7b-hf
model_type: LlamaForCausalLM
tokenizer_type: CodeLlamaTokenizer
is_llama_derived_model: true
hub_model_id: noeloco/modeltest1-dpo
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: noeloco/fizzbuzz-sft
type: alpaca
ds_type: json
hf_use_auth_token: true
push_dataset_to_hub: noeloco
val_set_size: 0.05
output_dir: ./lora-out
chat_template: chatml
rl: dpo
datasets:
- path: noeloco/fizzbuzz-dpo
split: train
data_files:
- /tmp/fizzbuzz-ft/datasets/training-set-dpo.json
#type:
# field_prompt: question
# field_chosen: chosen
# field_rejected: rejected
ds_type: json
#type: intel_apply_chatml
type: chatml.intel
hf_use_auth_token: true
push_dataset_to_hub: noeloco
val_set_size: 0.05
output_dir: ./lora-out
chat_template: chatml
sequence_len: 2048
sample_packing: false
pad_to_sequence_len: true
adapter: lora
lora_model_dir:
lora_r: 16
lora_alpha: 8
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: runpod1
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 1
micro_batch_size: 2
num_epochs: 3
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 4
saves_per_epoch: 1
debug: true
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```
</details><br>
# modeltest1-dpo
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 222
### Training results
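This repo ships only the LoRA adapter. As a hedged sketch of loading it on top of its base model — the repo ids are taken from this card, while the dtype and prompt are illustrative:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "codellama/CodeLlama-7b-hf"
adapter_id = "noeloco/modeltest1-dpo"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Illustrative prompt reflecting the fizzbuzz datasets named in the config above.
inputs = tokenizer("Write fizzbuzz in Python.", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```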
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.0.dev0
- Pytorch 2.1.2+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0 | {"license": "llama2", "library_name": "peft", "tags": ["axolotl", "dpo", "trl", "generated_from_trainer"], "base_model": "codellama/CodeLlama-7b-hf", "model-index": [{"name": "modeltest1-dpo", "results": []}]} | noeloco/modeltest1-dpo | null | [
"peft",
"pytorch",
"tensorboard",
"safetensors",
"llama",
"axolotl",
"dpo",
"trl",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-hf",
"license:llama2",
"4-bit",
"region:us"
] | null | 2024-04-29T13:09:38+00:00 |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
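Since this section is left blank, here is a hedged sketch based only on the base/adapter ids in this card's metadata; the example prompt is illustrative:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Jacaranda/kiswahili_pretrained_no_ext"
adapter_id = "Jacaranda/UlizaLlama_64_128"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Habari ya leo?", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```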
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.1.dev0 | {"library_name": "peft", "base_model": "Jacaranda/kiswahili_pretrained_no_ext"} | Jacaranda/UlizaLlama_64_128 | null | [
"peft",
"safetensors",
"llama",
"arxiv:1910.09700",
"base_model:Jacaranda/kiswahili_pretrained_no_ext",
"region:us"
] | null | 2024-04-29T13:10:10+00:00 |
null | null | {} | MK-CUPIST/crowdhuman_yolov5m | null | [
"region:us"
] | null | 2024-04-29T13:10:16+00:00 |
|
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to know whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo google/gemma-7b are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
    # Preferred path: load the quantized checkpoint through hqq's HF engine wrapper.
    model = HQQModelForCausalLM.from_quantized("PrunaAI/google-gemma-7b-HQQ-2bit-smashed", device_map='auto')
except Exception:
    # Fallback for hqq versions where the engine wrapper cannot load this checkpoint.
    model = AutoHQQHFModel.from_quantized("PrunaAI/google-gemma-7b-HQQ-2bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-7b, which provided the base model, before using this one. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "google/gemma-7b"} | PrunaAI/google-gemma-7b-HQQ-2bit-smashed | null | [
"transformers",
"gemma",
"text-generation",
"pruna-ai",
"base_model:google/gemma-7b",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T13:10:39+00:00 |
null | null |
# Strangemerges_32Neuralsynthesis-7B
Strangemerges_32Neuralsynthesis-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
- model: Gille/StrangeMerges_32-7B-slerp
- model: Kukedlc/NeuralSynthesis-7B-v0.3
merge_method: model_stock
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```
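This mergekit configuration is typically executed with the `mergekit-yaml` CLI. A hedged sketch follows — the config filename, output directory, and the `--copy-tokenizer` flag are illustrative assumptions:

```bash
# Save the YAML above as config.yaml, then run:
pip install -qU mergekit
mergekit-yaml config.yaml ./Strangemerges_32Neuralsynthesis-7B --copy-tokenizer
```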
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/Strangemerges_32Neuralsynthesis-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"]} | automerger/Strangemerges_32Neuralsynthesis-7B | null | [
"merge",
"mergekit",
"lazymergekit",
"automerger",
"license:apache-2.0",
"region:us"
] | null | 2024-04-29T13:10:42+00:00 |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to know whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo google/gemma-7b are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
    # Preferred path: load the quantized checkpoint through hqq's HF engine wrapper.
    model = HQQModelForCausalLM.from_quantized("PrunaAI/google-gemma-7b-HQQ-1bit-smashed", device_map='auto')
except Exception:
    # Fallback for hqq versions where the engine wrapper cannot load this checkpoint.
    model = AutoHQQHFModel.from_quantized("PrunaAI/google-gemma-7b-HQQ-1bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-7b, which provided the base model, before using this one. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "google/gemma-7b"} | PrunaAI/google-gemma-7b-HQQ-1bit-smashed | null | [
"transformers",
"gemma",
"text-generation",
"pruna-ai",
"base_model:google/gemma-7b",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T13:10:58+00:00 |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to know whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo google/gemma-7b-it are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
    # Preferred path: load the quantized checkpoint through hqq's HF engine wrapper.
    model = HQQModelForCausalLM.from_quantized("PrunaAI/google-gemma-7b-it-HQQ-2bit-smashed", device_map='auto')
except Exception:
    # Fallback for hqq versions where the engine wrapper cannot load this checkpoint.
    model = AutoHQQHFModel.from_quantized("PrunaAI/google-gemma-7b-it-HQQ-2bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-7b-it, which provided the base model, before using this one. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "google/gemma-7b-it"} | PrunaAI/google-gemma-7b-it-HQQ-2bit-smashed | null | [
"transformers",
"gemma",
"text-generation",
"pruna-ai",
"base_model:google/gemma-7b-it",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T13:11:14+00:00 |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to know whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo NousResearch/Meta-Llama-3-8B are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
    # Preferred path: load the quantized checkpoint through hqq's HF engine wrapper.
    model = HQQModelForCausalLM.from_quantized("PrunaAI/NousResearch-Meta-Llama-3-8B-HQQ-2bit-smashed", device_map='auto')
except Exception:
    # Fallback for hqq versions where the engine wrapper cannot load this checkpoint.
    model = AutoHQQHFModel.from_quantized("PrunaAI/NousResearch-Meta-Llama-3-8B-HQQ-2bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Meta-Llama-3-8B")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model NousResearch/Meta-Llama-3-8B, which provided the base model, before using this one. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "NousResearch/Meta-Llama-3-8B"} | PrunaAI/NousResearch-Meta-Llama-3-8B-HQQ-2bit-smashed | null | [
"transformers",
"llama",
"text-generation",
"pruna-ai",
"base_model:NousResearch/Meta-Llama-3-8B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T13:11:24+00:00 |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to find out whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo google/gemma-7b-it are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
    # Prefer the HQQ engine wrapper, which handles device placement for the quantized weights.
    model = HQQModelForCausalLM.from_quantized("PrunaAI/google-gemma-7b-it-HQQ-1bit-smashed", device_map='auto')
except Exception:
    # Fall back to the generic HQQ loader if the engine wrapper cannot load the model.
    model = AutoHQQHFModel.from_quantized("PrunaAI/google-gemma-7b-it-HQQ-1bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-7b-it, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "google/gemma-7b-it"} | PrunaAI/google-gemma-7b-it-HQQ-1bit-smashed | null | [
"transformers",
"gemma",
"text-generation",
"pruna-ai",
"base_model:google/gemma-7b-it",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T13:11:26+00:00 |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to find out whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo NousResearch/Meta-Llama-3-8B are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
    # Prefer the HQQ engine wrapper, which handles device placement for the quantized weights.
    model = HQQModelForCausalLM.from_quantized("PrunaAI/NousResearch-Meta-Llama-3-8B-HQQ-1bit-smashed", device_map='auto')
except Exception:
    # Fall back to the generic HQQ loader if the engine wrapper cannot load the model.
    model = AutoHQQHFModel.from_quantized("PrunaAI/NousResearch-Meta-Llama-3-8B-HQQ-1bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Meta-Llama-3-8B")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model NousResearch/Meta-Llama-3-8B, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "NousResearch/Meta-Llama-3-8B"} | PrunaAI/NousResearch-Meta-Llama-3-8B-HQQ-1bit-smashed | null | [
"transformers",
"llama",
"text-generation",
"pruna-ai",
"base_model:NousResearch/Meta-Llama-3-8B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T13:11:42+00:00 |
null | null |
I do not own this model. This is uploaded purely for academic demonstrations.
Credit: [BD](https://twitter.com/br_d)
CivitAI: [link](https://civitai.com/models/121083/blazing-drive) | {"license": "creativeml-openrail-m"} | ironjr/BlazingDriveV13md | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-04-29T13:11:48+00:00 |
null | null | {} | kpriyanshu256/ptp-wildchat-ds | null | [
"region:us"
] | null | 2024-04-29T13:11:54+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma-2b-sft
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the generator dataset.
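No usage snippet was provided; below is a minimal loading sketch (our own, assuming the adapter is published under `DuongTrongChi/gemma-2b-sft` as in this card's metadata, and that you have access to the gated google/gemma-2b base model):
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Load the base model with the LoRA/PEFT adapter applied on top.
model = AutoPeftModelForCausalLM.from_pretrained("DuongTrongChi/gemma-2b-sft")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")

inputs = tokenizer("Hello, my name is", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```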
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.17.0
- Tokenizers 0.15.2 | {"license": "gemma", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "google/gemma-2b", "model-index": [{"name": "gemma-2b-sft", "results": []}]} | DuongTrongChi/gemma-2b-sft | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:google/gemma-2b",
"license:gemma",
"region:us"
] | null | 2024-04-29T13:14:01+00:00 |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "microsoft/phi-2"} | TaufikT/code-task | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/phi-2",
"region:us"
] | null | 2024-04-29T13:14:14+00:00 |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to find out whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo google/gemma-2b-it are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
    # Prefer the HQQ engine wrapper, which handles device placement for the quantized weights.
    model = HQQModelForCausalLM.from_quantized("PrunaAI/google-gemma-2b-it-HQQ-2bit-smashed", device_map='auto')
except Exception:
    # Fall back to the generic HQQ loader if the engine wrapper cannot load the model.
    model = AutoHQQHFModel.from_quantized("PrunaAI/google-gemma-2b-it-HQQ-2bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-2b-it, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "google/gemma-2b-it"} | PrunaAI/google-gemma-2b-it-HQQ-2bit-smashed | null | [
"transformers",
"gemma",
"text-generation",
"pruna-ai",
"base_model:google/gemma-2b-it",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T13:15:04+00:00 |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to find out whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo google/gemma-2b-it are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
    # Prefer the HQQ engine wrapper, which handles device placement for the quantized weights.
    model = HQQModelForCausalLM.from_quantized("PrunaAI/google-gemma-2b-it-HQQ-1bit-smashed", device_map='auto')
except Exception:
    # Fall back to the generic HQQ loader if the engine wrapper cannot load the model.
    model = AutoHQQHFModel.from_quantized("PrunaAI/google-gemma-2b-it-HQQ-1bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-2b-it, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "google/gemma-2b-it"} | PrunaAI/google-gemma-2b-it-HQQ-1bit-smashed | null | [
"transformers",
"gemma",
"text-generation",
"pruna-ai",
"conversational",
"base_model:google/gemma-2b-it",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T13:15:55+00:00 |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base_ter
This model is a fine-tuned version of [google-t5/t5-base](https://huggingface.co/google-t5/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9640
- Bleu: 0.0101
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 2.1521 | 1.0 | 2420 | 1.9929 | 0.0101 | 19.0 |
| 2.0942 | 2.0 | 4840 | 1.9640 | 0.0101 | 19.0 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["bleu"], "base_model": "google-t5/t5-base", "model-index": [{"name": "t5-base_ter", "results": []}]} | lesha-grishchenko/t5-base_ter | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T13:16:00+00:00 |
null | transformers |
# Uploaded model
- **Developed by:** ayushgs
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | ayushgs/llama-3-8b-mh | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T13:16:02+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Phi2SFT
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 50
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "mit", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "microsoft/phi-2", "model-index": [{"name": "Phi2SFT", "results": []}]} | Abinayasankar/Phi2SFT | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-04-29T13:16:19+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Abinayasankar/Phi2-SFTFinetuned | null | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T13:16:30+00:00 |
null | null | # dolphin-2.9-llama3-8b-GGUF
- Original model: [dolphin-2.9-llama3-8b](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [dolphin-2.9-llama3-8b](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.
* [Ollama](https://github.com/jmorganca/ollama) Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.
* [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents.
<!-- README_GGUF.md-about-gguf end -->
<!-- compatibility_gguf start -->
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
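These figures can be sanity-checked with a little arithmetic. As an illustration (our own sketch, assuming the usual k-quant layout with an fp16 scale and min per super-block), the Q4_K number works out as:
```python
# GGML_TYPE_Q4_K: one super-block = 8 blocks x 32 weights = 256 weights
weights = 8 * 32
data_bits = 4 * weights     # 4-bit quantized weights
scale_bits = 8 * (6 + 6)    # 6-bit scale + 6-bit min for each of the 8 blocks
super_bits = 2 * 16         # fp16 scale and fp16 min for the super-block
print((data_bits + scale_bits + super_bits) / weights)  # -> 4.5 bpw
```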
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: LiteLLMs/dolphin-2.9-llama3-8b-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-00001-of-00009.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download LiteLLMs/dolphin-2.9-llama3-8b-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
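The same download can also be done from Python (a short sketch using `huggingface_hub`; the repo and filename are the ones shown above):
```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="LiteLLMs/dolphin-2.9-llama3-8b-GGUF",
    filename="Q4_0/Q4_0-00001-of-00009.gguf",
    local_dir=".",  # save into the current directory
)
print(path)
```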
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download LiteLLMs/dolphin-2.9-llama3-8b-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install huggingface_hub[hf_transfer]
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/dolphin-2.9-llama3-8b-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Q4_0/Q4_0-00001-of-00009.gguf", # Download the model file first
  n_ctx=8192,  # The max sequence length to use - Llama-3-8B has an 8k context; longer sequences require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<PROMPT>", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="chatml")  # This model uses the ChatML prompt format
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
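For example, a minimal LangChain + llama-cpp-python sketch (our own illustration; the `langchain_community` import path assumes a recent LangChain release):
```python
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./Q4_0/Q4_0-00001-of-00009.gguf",  # download the model file first
    n_ctx=8192,        # Llama-3-8B context window
    n_gpu_layers=35,   # set to 0 if you have no GPU acceleration
    temperature=0.7,
)
print(llm.invoke("What is the color of prunes?"))
```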
<!-- README_GGUF.md-how-to-run end -->
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: dolphin-2.9-llama3-8b
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Dolphin 2.9 Llama 3 8b 🐬
Curated and trained by Eric Hartford, Lucas Atkins, and Fernando Fernandes, and Cognitive Computations
Discord: https://discord.gg/8fbBeC7ZGx
<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" />
My appreciation for the sponsors of Dolphin 2.9:
- [Crusoe Cloud](https://crusoe.ai/) - provided excellent on-demand 10xL40S node
This model is based on Llama-3-8b, and is governed by [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](LICENSE)
The base model has 8k context, and the full-weight fine-tuning was with 4k sequence length.
It took 2.5 days on 8x L40S provided by Crusoe Cloud
This model was trained FFT on all parameters, using ChatML prompt template format.
example:
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
Dolphin-2.9 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling.
Dolphin is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models You are responsible for any content you create using this model. Enjoy responsibly.
Dolphin is licensed according to Meta's Llama license. I grant permission for any use, including commercial, that is in accordance with Meta's Llama-3 license. Dolphin was trained on data generated from GPT4, among other models.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: meta-llama/Meta-Llama-3-8B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
tokenizer_use_fast: false
load_in_8bit: false
load_in_4bit: false
strict: false
model_config:
datasets:
- path: /workspace/datasets/dolphin-2.9/dolphin201-sharegpt2.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/Ultrachat200kunfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/dolphin-coder-translate-sharegpt2.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/dolphin-coder-codegen-sharegpt2.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/m-a-p_Code-Feedback-sharegpt-unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/m-a-p_CodeFeedback-Filtered-Instruction-sharegpt-unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/not_samantha_norefusals.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/Orca-Math-resort-unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/agent_instruct_react_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/toolbench_instruct_j1s1_3k_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/toolbench_negative_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/toolbench_react_10p_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/toolbench_tflan_cot_30p_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/openhermes200k_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/SystemConversations.jsonl
type: sharegpt
conversation: chatml
chat_template: chatml
dataset_prepared_path: /workspace/datasets/dolphin-2.9/thingy
val_set_size: 0.0002
output_dir: ./out
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
gradient_accumulation_steps: 4
micro_batch_size: 3
num_epochs: 3
logging_steps: 1
optimizer: adamw_8bit
lr_scheduler: cosine
learning_rate: 2e-5
wandb_project: dolphin-2.9-mixtral-8x22b
wandb_watch:
wandb_run_id:
wandb_log_model:
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
saves_per_epoch: 4
save_total_limit: 2
save_steps:
evals_per_epoch: 4
eval_sample_packing: false
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
weight_decay: 0.05
fsdp:
fsdp_config:
special_tokens:
eos_token: "<|im_end|>"
pad_token: "<|end_of_text|>"
tokens:
- "<|im_start|>"
- "<|im_end|>"
```
</details><br>
## Quants
GGUF : https://huggingface.co/QuantFactory/dolphin-2.9-llama3-8b-GGUF
GGUF with imatrix: https://huggingface.co/bartowski/dolphin-2.9-llama3-8b-GGUF
Exllamav2: https://huggingface.co/bartowski/dolphin-2.9-llama3-8b-exl2
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 96
- total_eval_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 7
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.146 | 0.0005 | 1 | 1.1064 |
| 0.6962 | 0.2501 | 555 | 0.6636 |
| 0.6857 | 0.5001 | 1110 | 0.6503 |
| 0.6592 | 0.7502 | 1665 | 0.6419 |
| 0.6465 | 1.0002 | 2220 | 0.6317 |
| 0.5295 | 1.2395 | 2775 | 0.6408 |
| 0.5302 | 1.4895 | 3330 | 0.6351 |
| 0.5188 | 1.7396 | 3885 | 0.6227 |
| 0.521 | 1.9896 | 4440 | 0.6168 |
| 0.3968 | 2.2289 | 4995 | 0.6646 |
| 0.3776 | 2.4789 | 5550 | 0.6619 |
| 0.3983 | 2.7290 | 6105 | 0.6602 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
<!-- original-model-card end -->
| {"license": "other", "tags": ["generated_from_trainer", "axolotl", "GGUF"], "datasets": ["cognitivecomputations/Dolphin-2.9", "teknium/OpenHermes-2.5", "m-a-p/CodeFeedback-Filtered-Instruction", "cognitivecomputations/dolphin-coder", "cognitivecomputations/samantha-data", "HuggingFaceH4/ultrachat_200k", "microsoft/orca-math-word-problems-200k", "abacusai/SystemChat-1.1", "Locutusque/function-calling-chatml", "internlm/Agent-FLAN"], "base_model": "meta-llama/Meta-Llama-3-8B", "quantized_by": "andrijdavid", "model-index": [{"name": "out", "results": []}]} | LiteLLMs/dolphin-2.9-llama3-8b-GGUF | null | [
"gguf",
"generated_from_trainer",
"axolotl",
"GGUF",
"dataset:cognitivecomputations/Dolphin-2.9",
"dataset:teknium/OpenHermes-2.5",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:cognitivecomputations/samantha-data",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:abacusai/SystemChat-1.1",
"dataset:Locutusque/function-calling-chatml",
"dataset:internlm/Agent-FLAN",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:other",
"region:us"
] | null | 2024-04-29T13:17:04+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
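In the meantime, here is a minimal sketch, assuming a standard Llama-architecture causal LM; the prompt and generation settings are illustrative, not an official recommendation.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "shallow6414/uoee38m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Placeholder prompt; replace with your own input.
inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```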
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | shallow6414/uoee38m | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T13:18:20+00:00 |
null | null | {"license": "openrail"} | KeroroK66/korone | null | [
"license:openrail",
"region:us"
] | null | 2024-04-29T13:18:47+00:00 |
|
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GPT2-705M
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6046
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00025
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 300
- num_epochs: 20
- mixed_precision_training: Native AMP
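For reference, here is a hedged sketch of how the listed hyperparameters map onto `transformers.TrainingArguments`; this is a reconstruction for illustration, not the authors' actual training script, and the output directory is a placeholder.
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="GPT2-705M-RUN4",    # placeholder
    learning_rate=2.5e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,  # 16 x 8 = 128 effective train batch size
    lr_scheduler_type="cosine",
    warmup_steps=300,
    num_train_epochs=20,
    seed=42,
    fp16=True,                      # "Native AMP" mixed precision
)
```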
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 8.0336 | 1.0 | 3 | 7.3770 |
| 6.2535 | 2.0 | 6 | 6.3128 |
| 5.6213 | 3.0 | 9 | 5.6716 |
| 4.8242 | 4.0 | 12 | 5.1521 |
| 4.6266 | 5.0 | 15 | 4.9789 |
| 4.4097 | 6.0 | 18 | 4.7306 |
| 4.0358 | 7.0 | 21 | 4.5332 |
| 4.0027 | 8.0 | 24 | 4.4014 |
| 3.8638 | 9.0 | 27 | 4.1175 |
| 3.5414 | 10.0 | 30 | 4.0355 |
| 3.4701 | 11.0 | 33 | 3.8834 |
| 3.4822 | 12.0 | 36 | 3.8336 |
| 3.0602 | 13.0 | 39 | 3.7213 |
| 3.1109 | 14.0 | 42 | 3.7379 |
| 2.9087 | 15.0 | 45 | 3.7389 |
| 2.7124 | 16.0 | 48 | 3.6220 |
| 2.5867 | 17.0 | 51 | 3.7192 |
| 2.4577 | 18.0 | 54 | 3.5953 |
| 2.279 | 19.0 | 57 | 3.7648 |
| 2.3218 | 20.0 | 60 | 3.6046 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "GPT2-705M", "results": []}]} | ninagroot/GPT2-705M-RUN4 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T13:18:59+00:00 |
text-classification | transformers | {} | SetarehMohammadpour/my_awesome_model | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T13:20:00+00:00 |
|
null | null | {} | snakestamp/rvc-v2-models | null | [
"region:us"
] | null | 2024-04-29T13:20:16+00:00 |
|
null | null | {"license": "openrail"} | KeroroK66/LaplaceDarkness | null | [
"license:openrail",
"region:us"
] | null | 2024-04-29T13:20:32+00:00 |
|
null | null | {"license": "openrail"} | Loren85/Butterfinger-voice-90s-spot-simpson | null | [
"license:openrail",
"region:us"
] | null | 2024-04-29T13:21:20+00:00 |
|
null | diffusers | {} | eurecom-ds/scoresdeve-conditional-ema-shapes3d-out-t001-64 | null | [
"diffusers",
"safetensors",
"region:us"
] | null | 2024-04-29T13:21:20+00:00 |
|
object-detection | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/qubvel-hf-co/tuning-sota-cppe5/runs/w975f3w4)
# sensetime-deformable-detr-finetuned-10k-cppe5-reproduce
This model is a fine-tuned version of [SenseTime/deformable-detr](https://huggingface.co/SenseTime/deformable-detr) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0727
- Map: 0.3471
- Map 50: 0.6376
- Map 75: 0.3269
- Map Small: 0.2239
- Map Medium: 0.2687
- Map Large: 0.5542
- Mar 1: 0.3146
- Mar 10: 0.485
- Mar 100: 0.5045
- Mar Small: 0.3033
- Mar Medium: 0.4315
- Mar Large: 0.7048
- Map Coverall: 0.556
- Mar 100 Coverall: 0.6671
- Map Face Shield: 0.3346
- Mar 100 Face Shield: 0.4873
- Map Gloves: 0.2879
- Mar 100 Gloves: 0.4625
- Map Goggles: 0.2348
- Mar 100 Goggles: 0.44
- Map Mask: 0.3221
- Mar 100 Mask: 0.4653
## Model description
More information needed
## Intended uses & limitations
More information needed
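Since usage is not yet documented, here is a minimal inference sketch using the standard transformers object-detection pipeline; the image path and threshold are example values.
```python
from transformers import pipeline

# Load the fine-tuned checkpoint through the object-detection pipeline.
detector = pipeline(
    "object-detection",
    model="qubvel-hf/sensetime-deformable-detr-finetuned-10k-cppe5-reproduce",
)

# "example.jpg" is a placeholder path; threshold filters low-confidence boxes.
for detection in detector("example.jpg", threshold=0.5):
    print(detection["label"], detection["score"], detection["box"])
```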
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 1337
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Coverall | Mar 100 Coverall | Map Face Shield | Mar 100 Face Shield | Map Gloves | Mar 100 Gloves | Map Goggles | Mar 100 Goggles | Map Mask | Mar 100 Mask |
|:-------------:|:-------:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:------------:|:----------------:|:---------------:|:-------------------:|:----------:|:--------------:|:-----------:|:---------------:|:--------:|:------------:|
| 11.6745 | 0.9953 | 106 | 1.7743 | 0.0167 | 0.0443 | 0.0098 | 0.0113 | 0.0253 | 0.0122 | 0.0331 | 0.1258 | 0.1673 | 0.0575 | 0.155 | 0.178 | 0.0056 | 0.2622 | 0.008 | 0.1329 | 0.0044 | 0.1192 | 0.0001 | 0.0292 | 0.0654 | 0.2929 |
| 1.5556 | 2.0 | 213 | 1.5655 | 0.0293 | 0.0676 | 0.0249 | 0.0265 | 0.0367 | 0.0521 | 0.042 | 0.1789 | 0.2423 | 0.1235 | 0.2148 | 0.3336 | 0.0138 | 0.3158 | 0.0073 | 0.2038 | 0.0123 | 0.2844 | 0.0001 | 0.0185 | 0.1131 | 0.3889 |
| 1.4143 | 2.9953 | 319 | 1.4570 | 0.074 | 0.1565 | 0.0634 | 0.0315 | 0.0468 | 0.0894 | 0.1088 | 0.2778 | 0.3228 | 0.1222 | 0.2836 | 0.3768 | 0.2084 | 0.5955 | 0.013 | 0.2203 | 0.0184 | 0.3112 | 0.0033 | 0.1246 | 0.1267 | 0.3627 |
| 1.2557 | 4.0 | 426 | 1.3799 | 0.1076 | 0.212 | 0.099 | 0.0439 | 0.0618 | 0.1269 | 0.1323 | 0.3206 | 0.3609 | 0.1514 | 0.3238 | 0.4669 | 0.3272 | 0.6257 | 0.0165 | 0.257 | 0.0337 | 0.3634 | 0.0069 | 0.16 | 0.1537 | 0.3987 |
| 1.1941 | 4.9953 | 532 | 1.3209 | 0.1144 | 0.2176 | 0.1087 | 0.0438 | 0.0713 | 0.151 | 0.1434 | 0.3609 | 0.4018 | 0.2001 | 0.3457 | 0.564 | 0.3425 | 0.6644 | 0.0144 | 0.2949 | 0.0407 | 0.3665 | 0.0154 | 0.2585 | 0.1591 | 0.4244 |
| 1.155 | 6.0 | 639 | 1.2990 | 0.1515 | 0.2906 | 0.1425 | 0.0672 | 0.1129 | 0.1945 | 0.1753 | 0.3935 | 0.4237 | 0.1877 | 0.368 | 0.598 | 0.4158 | 0.6707 | 0.0406 | 0.3203 | 0.0925 | 0.3951 | 0.0189 | 0.3138 | 0.1894 | 0.4187 |
| 1.1215 | 6.9953 | 745 | 1.2747 | 0.1571 | 0.309 | 0.143 | 0.0696 | 0.125 | 0.1889 | 0.1724 | 0.3909 | 0.43 | 0.2033 | 0.3851 | 0.604 | 0.4395 | 0.6743 | 0.0498 | 0.381 | 0.0871 | 0.4036 | 0.018 | 0.2692 | 0.1911 | 0.4218 |
| 1.0717 | 8.0 | 852 | 1.2178 | 0.1571 | 0.3227 | 0.1375 | 0.0901 | 0.1195 | 0.2164 | 0.198 | 0.4041 | 0.4433 | 0.2476 | 0.4 | 0.5814 | 0.4163 | 0.6811 | 0.0407 | 0.3582 | 0.0893 | 0.4116 | 0.049 | 0.3246 | 0.1903 | 0.4409 |
| 1.0565 | 8.9953 | 958 | 1.2809 | 0.1655 | 0.3272 | 0.1587 | 0.091 | 0.1352 | 0.2086 | 0.2214 | 0.4057 | 0.4395 | 0.2224 | 0.3794 | 0.6548 | 0.3741 | 0.6495 | 0.0995 | 0.4316 | 0.098 | 0.3714 | 0.0485 | 0.3277 | 0.2076 | 0.4173 |
| 1.0112 | 10.0 | 1065 | 1.1774 | 0.2008 | 0.3844 | 0.2 | 0.0948 | 0.1963 | 0.2942 | 0.2258 | 0.4368 | 0.4664 | 0.273 | 0.4293 | 0.6294 | 0.4777 | 0.6856 | 0.0681 | 0.343 | 0.1224 | 0.4321 | 0.09 | 0.3938 | 0.246 | 0.4773 |
| 0.9978 | 10.9953 | 1171 | 1.1756 | 0.2299 | 0.4406 | 0.2158 | 0.0987 | 0.208 | 0.3558 | 0.2403 | 0.4529 | 0.4784 | 0.2767 | 0.4338 | 0.6759 | 0.474 | 0.6856 | 0.1359 | 0.4127 | 0.1528 | 0.4388 | 0.1218 | 0.3923 | 0.2652 | 0.4627 |
| 0.9543 | 12.0 | 1278 | 1.1263 | 0.2416 | 0.459 | 0.226 | 0.129 | 0.2064 | 0.3378 | 0.2497 | 0.4603 | 0.4893 | 0.3098 | 0.439 | 0.6639 | 0.51 | 0.6892 | 0.133 | 0.4228 | 0.1824 | 0.4393 | 0.116 | 0.4385 | 0.2665 | 0.4569 |
| 0.9578 | 12.9953 | 1384 | 1.1217 | 0.2622 | 0.4835 | 0.2467 | 0.1248 | 0.2321 | 0.4226 | 0.2676 | 0.4575 | 0.4884 | 0.3023 | 0.4572 | 0.661 | 0.5304 | 0.6977 | 0.172 | 0.4139 | 0.1784 | 0.4603 | 0.1386 | 0.4062 | 0.2917 | 0.464 |
| 0.9196 | 14.0 | 1491 | 1.1083 | 0.26 | 0.4885 | 0.2469 | 0.1338 | 0.2185 | 0.3915 | 0.2684 | 0.4657 | 0.4912 | 0.2804 | 0.4423 | 0.6744 | 0.5478 | 0.7005 | 0.1698 | 0.4684 | 0.1808 | 0.4246 | 0.1119 | 0.4 | 0.2895 | 0.4627 |
| 0.8938 | 14.9953 | 1597 | 1.1487 | 0.2506 | 0.4849 | 0.2305 | 0.1269 | 0.2174 | 0.3739 | 0.2597 | 0.4527 | 0.4777 | 0.2813 | 0.4375 | 0.6726 | 0.5043 | 0.6748 | 0.1642 | 0.462 | 0.1753 | 0.4321 | 0.125 | 0.3785 | 0.2842 | 0.4413 |
| 0.8907 | 16.0 | 1704 | 1.1091 | 0.2545 | 0.4905 | 0.2477 | 0.1267 | 0.2276 | 0.392 | 0.2634 | 0.4574 | 0.4881 | 0.3075 | 0.4439 | 0.6852 | 0.524 | 0.695 | 0.1588 | 0.4443 | 0.188 | 0.4634 | 0.1298 | 0.3815 | 0.2717 | 0.4564 |
| 0.8669 | 16.9953 | 1810 | 1.0938 | 0.2838 | 0.5311 | 0.274 | 0.1446 | 0.2359 | 0.4327 | 0.2853 | 0.4676 | 0.49 | 0.3005 | 0.444 | 0.6873 | 0.5439 | 0.6905 | 0.2054 | 0.457 | 0.2177 | 0.4487 | 0.1542 | 0.4046 | 0.2976 | 0.4493 |
| 0.8495 | 18.0 | 1917 | 1.0866 | 0.2755 | 0.5134 | 0.2554 | 0.1362 | 0.2355 | 0.4541 | 0.2681 | 0.4548 | 0.4799 | 0.2886 | 0.4362 | 0.6823 | 0.5533 | 0.6995 | 0.1514 | 0.4418 | 0.2165 | 0.4598 | 0.159 | 0.3492 | 0.2974 | 0.4493 |
| 0.8421 | 18.9953 | 2023 | 1.0856 | 0.2719 | 0.5259 | 0.2519 | 0.1366 | 0.2341 | 0.4521 | 0.28 | 0.4539 | 0.4769 | 0.3123 | 0.4265 | 0.6567 | 0.5057 | 0.6459 | 0.1693 | 0.4418 | 0.2233 | 0.4629 | 0.1574 | 0.3677 | 0.304 | 0.4662 |
| 0.813 | 20.0 | 2130 | 1.0726 | 0.2846 | 0.5348 | 0.2736 | 0.1497 | 0.2304 | 0.4655 | 0.2851 | 0.4696 | 0.4932 | 0.3084 | 0.4391 | 0.6959 | 0.5259 | 0.6586 | 0.1864 | 0.5051 | 0.2468 | 0.4406 | 0.1622 | 0.3969 | 0.3018 | 0.4649 |
| 0.8215 | 20.9953 | 2236 | 1.0397 | 0.2966 | 0.5453 | 0.2992 | 0.1696 | 0.2338 | 0.4806 | 0.2887 | 0.4835 | 0.5056 | 0.3419 | 0.4607 | 0.694 | 0.5695 | 0.6973 | 0.1967 | 0.5051 | 0.2348 | 0.4647 | 0.1719 | 0.3815 | 0.3103 | 0.4796 |
| 0.7876 | 22.0 | 2343 | 1.0477 | 0.2908 | 0.5476 | 0.2759 | 0.1466 | 0.2439 | 0.4577 | 0.287 | 0.4747 | 0.5011 | 0.3174 | 0.4614 | 0.6895 | 0.5722 | 0.6959 | 0.1964 | 0.4848 | 0.2358 | 0.4656 | 0.1462 | 0.3892 | 0.3034 | 0.4698 |
| 0.8012 | 22.9953 | 2449 | 1.0780 | 0.2829 | 0.5255 | 0.2685 | 0.137 | 0.2255 | 0.4389 | 0.2786 | 0.4624 | 0.4898 | 0.2655 | 0.44 | 0.689 | 0.5429 | 0.6676 | 0.2043 | 0.4772 | 0.2418 | 0.4616 | 0.1156 | 0.3646 | 0.3101 | 0.4782 |
| 0.7738 | 24.0 | 2556 | 1.0447 | 0.3086 | 0.5717 | 0.296 | 0.18 | 0.2464 | 0.4829 | 0.286 | 0.4777 | 0.5067 | 0.3497 | 0.4605 | 0.6955 | 0.5613 | 0.6815 | 0.2118 | 0.4797 | 0.2567 | 0.4679 | 0.1977 | 0.4123 | 0.3156 | 0.492 |
| 0.7738 | 24.9953 | 2662 | 1.0471 | 0.3099 | 0.5827 | 0.2977 | 0.1699 | 0.2539 | 0.4816 | 0.2908 | 0.465 | 0.4846 | 0.3136 | 0.4363 | 0.6683 | 0.5595 | 0.6761 | 0.2114 | 0.4468 | 0.2579 | 0.4571 | 0.2039 | 0.3846 | 0.3166 | 0.4582 |
| 0.7549 | 26.0 | 2769 | 1.0618 | 0.3113 | 0.5754 | 0.289 | 0.1962 | 0.2495 | 0.4786 | 0.2938 | 0.4705 | 0.4954 | 0.2923 | 0.4491 | 0.6872 | 0.5627 | 0.6802 | 0.2508 | 0.4899 | 0.2579 | 0.45 | 0.1777 | 0.3954 | 0.3074 | 0.4613 |
| 0.7715 | 26.9953 | 2875 | 1.0328 | 0.313 | 0.5867 | 0.2898 | 0.1915 | 0.2419 | 0.4874 | 0.2935 | 0.4789 | 0.5051 | 0.3092 | 0.4417 | 0.7034 | 0.5449 | 0.6815 | 0.2329 | 0.4861 | 0.26 | 0.4741 | 0.1915 | 0.4 | 0.3356 | 0.484 |
| 0.7386 | 28.0 | 2982 | 1.0318 | 0.3182 | 0.5894 | 0.2969 | 0.2087 | 0.2584 | 0.4936 | 0.3043 | 0.4905 | 0.5203 | 0.3489 | 0.463 | 0.6977 | 0.5581 | 0.6761 | 0.2404 | 0.5 | 0.2663 | 0.4871 | 0.1996 | 0.4431 | 0.3264 | 0.4951 |
| 0.7388 | 28.9953 | 3088 | 1.0309 | 0.3151 | 0.5908 | 0.2953 | 0.1714 | 0.2506 | 0.4845 | 0.299 | 0.4833 | 0.507 | 0.3136 | 0.4511 | 0.7004 | 0.5507 | 0.6874 | 0.2508 | 0.4937 | 0.2607 | 0.4696 | 0.191 | 0.4138 | 0.3224 | 0.4707 |
| 0.7092 | 30.0 | 3195 | 1.0388 | 0.3165 | 0.5937 | 0.2909 | 0.1496 | 0.2446 | 0.5126 | 0.2976 | 0.4785 | 0.5024 | 0.2835 | 0.4463 | 0.6904 | 0.5549 | 0.6838 | 0.2498 | 0.4962 | 0.2718 | 0.467 | 0.2004 | 0.4062 | 0.3053 | 0.4591 |
| 0.7114 | 30.9953 | 3301 | 1.0427 | 0.3102 | 0.5886 | 0.286 | 0.1976 | 0.2561 | 0.4823 | 0.2985 | 0.4676 | 0.4946 | 0.2994 | 0.445 | 0.7112 | 0.5535 | 0.6806 | 0.2505 | 0.4975 | 0.2679 | 0.4621 | 0.167 | 0.3769 | 0.3123 | 0.456 |
| 0.7032 | 32.0 | 3408 | 1.0404 | 0.3168 | 0.5916 | 0.3 | 0.2018 | 0.2554 | 0.5038 | 0.2933 | 0.4776 | 0.5102 | 0.3669 | 0.4527 | 0.7007 | 0.5593 | 0.6887 | 0.244 | 0.4823 | 0.2684 | 0.4853 | 0.1801 | 0.4123 | 0.332 | 0.4822 |
| 0.7006 | 32.9953 | 3514 | 1.0246 | 0.3287 | 0.5948 | 0.3158 | 0.21 | 0.2597 | 0.5103 | 0.304 | 0.4967 | 0.522 | 0.3417 | 0.4715 | 0.7227 | 0.5832 | 0.7014 | 0.2496 | 0.519 | 0.2767 | 0.475 | 0.1936 | 0.4308 | 0.3405 | 0.484 |
| 0.6886 | 34.0 | 3621 | 1.0414 | 0.3186 | 0.5938 | 0.2917 | 0.1831 | 0.2501 | 0.5168 | 0.2984 | 0.4753 | 0.5011 | 0.2782 | 0.4546 | 0.689 | 0.5766 | 0.7086 | 0.2695 | 0.5089 | 0.2707 | 0.4629 | 0.1621 | 0.3738 | 0.3141 | 0.4511 |
| 0.7005 | 34.9953 | 3727 | 1.0315 | 0.3403 | 0.6141 | 0.3297 | 0.1953 | 0.264 | 0.5513 | 0.3007 | 0.4799 | 0.5053 | 0.2818 | 0.4534 | 0.7013 | 0.5757 | 0.6896 | 0.3081 | 0.5063 | 0.268 | 0.4621 | 0.2108 | 0.3908 | 0.339 | 0.4778 |
| 0.6736 | 36.0 | 3834 | 1.0174 | 0.3302 | 0.5993 | 0.3115 | 0.2041 | 0.263 | 0.5133 | 0.3003 | 0.4861 | 0.5088 | 0.2864 | 0.4674 | 0.69 | 0.5769 | 0.6973 | 0.2593 | 0.4684 | 0.278 | 0.492 | 0.2139 | 0.4231 | 0.323 | 0.4636 |
| 0.6713 | 36.9953 | 3940 | 1.0260 | 0.3312 | 0.5975 | 0.3118 | 0.2167 | 0.2652 | 0.5174 | 0.3112 | 0.4891 | 0.5127 | 0.3276 | 0.4565 | 0.7097 | 0.5684 | 0.6914 | 0.2784 | 0.5063 | 0.2671 | 0.4603 | 0.2226 | 0.4354 | 0.3192 | 0.4702 |
| 0.6572 | 38.0 | 4047 | 1.0167 | 0.3431 | 0.6228 | 0.3377 | 0.2027 | 0.2728 | 0.5576 | 0.3079 | 0.4877 | 0.5142 | 0.3312 | 0.4538 | 0.7063 | 0.5705 | 0.6811 | 0.2904 | 0.5063 | 0.2847 | 0.4817 | 0.2318 | 0.4169 | 0.3381 | 0.4849 |
| 0.6608 | 38.9953 | 4153 | 1.0307 | 0.3428 | 0.6126 | 0.3359 | 0.21 | 0.2775 | 0.5477 | 0.3043 | 0.4884 | 0.515 | 0.314 | 0.4668 | 0.7124 | 0.5706 | 0.691 | 0.3056 | 0.5367 | 0.2919 | 0.479 | 0.2123 | 0.3923 | 0.3336 | 0.476 |
| 0.63 | 40.0 | 4260 | 1.0264 | 0.3443 | 0.6177 | 0.3276 | 0.2114 | 0.2806 | 0.5421 | 0.3121 | 0.4916 | 0.5202 | 0.3188 | 0.4705 | 0.7166 | 0.5783 | 0.691 | 0.3117 | 0.5291 | 0.2732 | 0.4817 | 0.2299 | 0.4308 | 0.3282 | 0.4684 |
| 0.6396 | 40.9953 | 4366 | 1.0215 | 0.3425 | 0.621 | 0.3171 | 0.2082 | 0.2728 | 0.5362 | 0.3092 | 0.4871 | 0.5079 | 0.2951 | 0.4554 | 0.7131 | 0.5877 | 0.7027 | 0.3052 | 0.4937 | 0.288 | 0.475 | 0.1991 | 0.4 | 0.3324 | 0.468 |
| 0.625 | 42.0 | 4473 | 1.0178 | 0.3484 | 0.6218 | 0.3461 | 0.2131 | 0.2829 | 0.5369 | 0.3012 | 0.4949 | 0.5168 | 0.3244 | 0.4631 | 0.7096 | 0.5735 | 0.6932 | 0.3088 | 0.5127 | 0.2923 | 0.479 | 0.2422 | 0.4246 | 0.3251 | 0.4742 |
| 0.6251 | 42.9953 | 4579 | 1.0301 | 0.3461 | 0.6247 | 0.3439 | 0.2131 | 0.2702 | 0.5491 | 0.306 | 0.4934 | 0.5121 | 0.2848 | 0.4461 | 0.7165 | 0.5833 | 0.6901 | 0.301 | 0.5051 | 0.2873 | 0.4835 | 0.2335 | 0.4123 | 0.3255 | 0.4693 |
| 0.6135 | 44.0 | 4686 | 1.0139 | 0.3493 | 0.6317 | 0.3372 | 0.2248 | 0.2862 | 0.5572 | 0.3072 | 0.4894 | 0.5117 | 0.3177 | 0.4626 | 0.7082 | 0.5812 | 0.6914 | 0.3109 | 0.5152 | 0.2954 | 0.4839 | 0.223 | 0.3846 | 0.3361 | 0.4836 |
| 0.6144 | 44.9953 | 4792 | 1.0285 | 0.3509 | 0.6356 | 0.3476 | 0.224 | 0.2788 | 0.5406 | 0.3077 | 0.4964 | 0.5158 | 0.3294 | 0.4636 | 0.7093 | 0.5641 | 0.6739 | 0.3102 | 0.5013 | 0.2928 | 0.4839 | 0.2503 | 0.4338 | 0.3372 | 0.4862 |
| 0.6108 | 46.0 | 4899 | 1.0337 | 0.3399 | 0.6192 | 0.3317 | 0.2009 | 0.2756 | 0.5243 | 0.2977 | 0.4824 | 0.503 | 0.2895 | 0.4587 | 0.6878 | 0.5765 | 0.6788 | 0.3125 | 0.4975 | 0.2873 | 0.4719 | 0.2004 | 0.3923 | 0.323 | 0.4747 |
| 0.6081 | 46.9953 | 5005 | 1.0228 | 0.3444 | 0.6169 | 0.341 | 0.2173 | 0.281 | 0.5443 | 0.3041 | 0.4928 | 0.5181 | 0.3604 | 0.462 | 0.7082 | 0.5721 | 0.6887 | 0.313 | 0.5038 | 0.2941 | 0.4754 | 0.2061 | 0.4292 | 0.3367 | 0.4933 |
| 0.5857 | 48.0 | 5112 | 1.0137 | 0.348 | 0.6196 | 0.3419 | 0.2091 | 0.2733 | 0.5542 | 0.308 | 0.4916 | 0.5113 | 0.297 | 0.4554 | 0.7015 | 0.5836 | 0.6905 | 0.3218 | 0.5089 | 0.2964 | 0.4848 | 0.214 | 0.3908 | 0.3241 | 0.4813 |
| 0.5963 | 48.9953 | 5218 | 1.0341 | 0.3433 | 0.6262 | 0.3411 | 0.218 | 0.2703 | 0.5395 | 0.3019 | 0.4899 | 0.5166 | 0.3304 | 0.4661 | 0.7068 | 0.5829 | 0.6941 | 0.2951 | 0.5013 | 0.2938 | 0.4888 | 0.2044 | 0.4123 | 0.3405 | 0.4862 |
| 0.5828 | 50.0 | 5325 | 1.0220 | 0.3402 | 0.6157 | 0.3433 | 0.2175 | 0.2748 | 0.5347 | 0.3054 | 0.496 | 0.5184 | 0.321 | 0.4733 | 0.7146 | 0.5809 | 0.6977 | 0.3071 | 0.4975 | 0.29 | 0.4857 | 0.1992 | 0.4292 | 0.3238 | 0.4818 |
| 0.583 | 50.9953 | 5431 | 1.0163 | 0.3483 | 0.6272 | 0.3378 | 0.2054 | 0.2819 | 0.554 | 0.3081 | 0.4954 | 0.5174 | 0.3028 | 0.4757 | 0.701 | 0.5748 | 0.6896 | 0.3137 | 0.5165 | 0.2908 | 0.4866 | 0.234 | 0.4185 | 0.3284 | 0.476 |
| 0.5574 | 52.0 | 5538 | 1.0144 | 0.3462 | 0.6241 | 0.3344 | 0.2138 | 0.2757 | 0.5384 | 0.3036 | 0.4917 | 0.5163 | 0.3087 | 0.4663 | 0.7117 | 0.5794 | 0.6941 | 0.2983 | 0.4987 | 0.3086 | 0.4853 | 0.222 | 0.4385 | 0.3225 | 0.4649 |
| 0.565 | 52.9953 | 5644 | 1.0384 | 0.3361 | 0.6159 | 0.3305 | 0.2078 | 0.2691 | 0.5349 | 0.3004 | 0.4874 | 0.511 | 0.3151 | 0.4652 | 0.7071 | 0.5649 | 0.6851 | 0.2748 | 0.4886 | 0.2937 | 0.4888 | 0.222 | 0.42 | 0.325 | 0.4724 |
| 0.5545 | 54.0 | 5751 | 1.0353 | 0.338 | 0.6151 | 0.3341 | 0.2055 | 0.2616 | 0.5487 | 0.3045 | 0.4796 | 0.5039 | 0.334 | 0.4435 | 0.7153 | 0.5729 | 0.6779 | 0.2751 | 0.4797 | 0.2887 | 0.4714 | 0.2189 | 0.4092 | 0.3342 | 0.4813 |
| 0.5533 | 54.9953 | 5857 | 1.0273 | 0.3478 | 0.6322 | 0.3282 | 0.2019 | 0.2765 | 0.5498 | 0.3091 | 0.4827 | 0.5049 | 0.289 | 0.4568 | 0.7001 | 0.5784 | 0.6815 | 0.3102 | 0.4975 | 0.2974 | 0.4799 | 0.2306 | 0.4031 | 0.3223 | 0.4627 |
| 0.5343 | 56.0 | 5964 | 1.0379 | 0.3489 | 0.631 | 0.3353 | 0.2161 | 0.2711 | 0.5601 | 0.3114 | 0.4885 | 0.5098 | 0.3114 | 0.4584 | 0.721 | 0.5659 | 0.6824 | 0.3237 | 0.5051 | 0.3084 | 0.4853 | 0.2272 | 0.4092 | 0.3193 | 0.4671 |
| 0.5468 | 56.9953 | 6070 | 1.0393 | 0.3454 | 0.6401 | 0.3253 | 0.2075 | 0.2698 | 0.5397 | 0.3079 | 0.4848 | 0.5033 | 0.3134 | 0.4366 | 0.686 | 0.5642 | 0.668 | 0.3186 | 0.4797 | 0.2974 | 0.4915 | 0.2271 | 0.4092 | 0.3196 | 0.468 |
| 0.5431 | 58.0 | 6177 | 1.0278 | 0.344 | 0.63 | 0.3249 | 0.2067 | 0.2747 | 0.5383 | 0.3093 | 0.4896 | 0.5083 | 0.2966 | 0.4465 | 0.7067 | 0.5634 | 0.6671 | 0.3113 | 0.4962 | 0.2989 | 0.4902 | 0.2229 | 0.4138 | 0.3235 | 0.4742 |
| 0.5339 | 58.9953 | 6283 | 1.0319 | 0.3408 | 0.6341 | 0.3086 | 0.214 | 0.2713 | 0.5379 | 0.3031 | 0.4853 | 0.505 | 0.2896 | 0.4521 | 0.7106 | 0.5694 | 0.6815 | 0.2998 | 0.4709 | 0.3092 | 0.4978 | 0.2052 | 0.4092 | 0.3205 | 0.4658 |
| 0.5334 | 60.0 | 6390 | 1.0279 | 0.3496 | 0.6412 | 0.3359 | 0.2139 | 0.2859 | 0.5352 | 0.3089 | 0.4929 | 0.5133 | 0.3019 | 0.4679 | 0.7023 | 0.5635 | 0.6802 | 0.3222 | 0.5 | 0.3064 | 0.4879 | 0.2309 | 0.4277 | 0.3249 | 0.4707 |
| 0.5289 | 60.9953 | 6496 | 1.0478 | 0.3506 | 0.6433 | 0.321 | 0.2228 | 0.2723 | 0.5336 | 0.304 | 0.4892 | 0.5068 | 0.297 | 0.4413 | 0.693 | 0.5761 | 0.6838 | 0.3275 | 0.4873 | 0.303 | 0.4696 | 0.2246 | 0.4308 | 0.3221 | 0.4622 |
| 0.5091 | 62.0 | 6603 | 1.0321 | 0.3484 | 0.6408 | 0.3242 | 0.2247 | 0.2711 | 0.5639 | 0.309 | 0.4896 | 0.5119 | 0.3139 | 0.4471 | 0.7109 | 0.5561 | 0.6779 | 0.3219 | 0.4848 | 0.297 | 0.4951 | 0.245 | 0.4308 | 0.3222 | 0.4707 |
| 0.5239 | 62.9953 | 6709 | 1.0277 | 0.3516 | 0.6421 | 0.3371 | 0.2329 | 0.2764 | 0.5504 | 0.3045 | 0.491 | 0.5096 | 0.3178 | 0.4378 | 0.7055 | 0.5655 | 0.6766 | 0.3156 | 0.4646 | 0.3137 | 0.4973 | 0.2267 | 0.44 | 0.3365 | 0.4693 |
| 0.5025 | 64.0 | 6816 | 1.0331 | 0.3474 | 0.6385 | 0.3144 | 0.2295 | 0.286 | 0.5378 | 0.3095 | 0.4949 | 0.5131 | 0.3183 | 0.4571 | 0.7047 | 0.5603 | 0.6878 | 0.3093 | 0.4722 | 0.3043 | 0.4866 | 0.235 | 0.4538 | 0.328 | 0.4649 |
| 0.4991 | 64.9953 | 6922 | 1.0323 | 0.3457 | 0.6359 | 0.3158 | 0.2193 | 0.2808 | 0.5367 | 0.3091 | 0.485 | 0.507 | 0.2986 | 0.4511 | 0.7043 | 0.5653 | 0.6838 | 0.3106 | 0.481 | 0.2987 | 0.4804 | 0.2308 | 0.4169 | 0.3229 | 0.4729 |
| 0.4969 | 66.0 | 7029 | 1.0250 | 0.3448 | 0.6468 | 0.317 | 0.23 | 0.2781 | 0.5449 | 0.3031 | 0.4919 | 0.5109 | 0.3493 | 0.4479 | 0.6951 | 0.5477 | 0.6707 | 0.311 | 0.4911 | 0.3046 | 0.4924 | 0.2424 | 0.4369 | 0.318 | 0.4631 |
| 0.496 | 66.9953 | 7135 | 1.0300 | 0.3461 | 0.6272 | 0.325 | 0.2262 | 0.2772 | 0.5318 | 0.3097 | 0.4913 | 0.5118 | 0.3102 | 0.4598 | 0.6967 | 0.5751 | 0.6941 | 0.3067 | 0.4696 | 0.3018 | 0.4812 | 0.2244 | 0.4415 | 0.3224 | 0.4724 |
| 0.4788 | 68.0 | 7242 | 1.0452 | 0.3402 | 0.6215 | 0.3124 | 0.2249 | 0.2665 | 0.5439 | 0.3048 | 0.4923 | 0.5102 | 0.338 | 0.4531 | 0.7045 | 0.5618 | 0.6806 | 0.3051 | 0.4797 | 0.2956 | 0.4741 | 0.2261 | 0.4492 | 0.3127 | 0.4671 |
| 0.4807 | 68.9953 | 7348 | 1.0359 | 0.3461 | 0.6413 | 0.3141 | 0.2239 | 0.2736 | 0.5437 | 0.3079 | 0.4836 | 0.5028 | 0.3072 | 0.4486 | 0.6881 | 0.5605 | 0.6685 | 0.3124 | 0.4696 | 0.3014 | 0.4808 | 0.2347 | 0.4231 | 0.3216 | 0.472 |
| 0.4756 | 70.0 | 7455 | 1.0310 | 0.3522 | 0.6459 | 0.3281 | 0.222 | 0.2816 | 0.5535 | 0.3078 | 0.4878 | 0.5082 | 0.2998 | 0.4607 | 0.7001 | 0.5784 | 0.6919 | 0.3348 | 0.4823 | 0.2935 | 0.4763 | 0.2294 | 0.4169 | 0.325 | 0.4738 |
| 0.4738 | 70.9953 | 7561 | 1.0440 | 0.3422 | 0.6271 | 0.3121 | 0.2157 | 0.2757 | 0.5429 | 0.31 | 0.483 | 0.5072 | 0.3103 | 0.4482 | 0.6956 | 0.5563 | 0.6703 | 0.3091 | 0.4835 | 0.2995 | 0.4848 | 0.225 | 0.4277 | 0.3208 | 0.4698 |
| 0.4674 | 72.0 | 7668 | 1.0492 | 0.3474 | 0.6321 | 0.3275 | 0.2241 | 0.2692 | 0.548 | 0.314 | 0.4834 | 0.5043 | 0.2946 | 0.4371 | 0.707 | 0.5626 | 0.6703 | 0.3187 | 0.4975 | 0.295 | 0.4754 | 0.2413 | 0.4123 | 0.3192 | 0.4658 |
| 0.4592 | 72.9953 | 7774 | 1.0344 | 0.3545 | 0.6457 | 0.3418 | 0.2373 | 0.2789 | 0.559 | 0.314 | 0.4912 | 0.5084 | 0.3124 | 0.4394 | 0.6999 | 0.5673 | 0.6766 | 0.3326 | 0.4949 | 0.3024 | 0.4728 | 0.2468 | 0.4277 | 0.3234 | 0.4702 |
| 0.4584 | 74.0 | 7881 | 1.0381 | 0.3451 | 0.6271 | 0.3398 | 0.2388 | 0.2702 | 0.5483 | 0.3144 | 0.4882 | 0.5113 | 0.3169 | 0.4468 | 0.7077 | 0.5631 | 0.6788 | 0.3154 | 0.4873 | 0.2912 | 0.4812 | 0.2251 | 0.4323 | 0.3307 | 0.4769 |
| 0.4648 | 74.9953 | 7987 | 1.0515 | 0.3506 | 0.6384 | 0.3369 | 0.2269 | 0.2659 | 0.5582 | 0.3168 | 0.4842 | 0.5067 | 0.3154 | 0.4328 | 0.6958 | 0.5646 | 0.6694 | 0.3313 | 0.4886 | 0.2944 | 0.4741 | 0.2264 | 0.4262 | 0.3365 | 0.4751 |
| 0.4487 | 76.0 | 8094 | 1.0408 | 0.3479 | 0.6399 | 0.3232 | 0.2223 | 0.2783 | 0.5555 | 0.3149 | 0.486 | 0.5077 | 0.322 | 0.4466 | 0.699 | 0.5573 | 0.6671 | 0.3233 | 0.4924 | 0.2944 | 0.4795 | 0.2408 | 0.4369 | 0.3235 | 0.4627 |
| 0.4482 | 76.9953 | 8200 | 1.0368 | 0.3505 | 0.6396 | 0.3153 | 0.2229 | 0.2796 | 0.5552 | 0.3173 | 0.487 | 0.508 | 0.3088 | 0.4459 | 0.7042 | 0.5604 | 0.6716 | 0.3201 | 0.4911 | 0.2959 | 0.4786 | 0.2532 | 0.4292 | 0.323 | 0.4693 |
| 0.4359 | 78.0 | 8307 | 1.0456 | 0.3488 | 0.6434 | 0.3181 | 0.2308 | 0.2708 | 0.549 | 0.3117 | 0.4823 | 0.5029 | 0.3098 | 0.4397 | 0.6933 | 0.5704 | 0.6716 | 0.3174 | 0.4823 | 0.2983 | 0.4777 | 0.2353 | 0.4185 | 0.3227 | 0.4644 |
| 0.445 | 78.9953 | 8413 | 1.0598 | 0.3472 | 0.638 | 0.3149 | 0.2236 | 0.2719 | 0.5492 | 0.313 | 0.4873 | 0.5051 | 0.2966 | 0.4461 | 0.7045 | 0.5497 | 0.6649 | 0.3331 | 0.5013 | 0.2907 | 0.4737 | 0.2397 | 0.4231 | 0.3226 | 0.4627 |
| 0.4328 | 80.0 | 8520 | 1.0558 | 0.3477 | 0.6419 | 0.3326 | 0.2335 | 0.2701 | 0.5483 | 0.3112 | 0.4865 | 0.506 | 0.3232 | 0.432 | 0.7048 | 0.5599 | 0.668 | 0.3244 | 0.4823 | 0.292 | 0.4772 | 0.2396 | 0.4369 | 0.3228 | 0.4658 |
| 0.4321 | 80.9953 | 8626 | 1.0447 | 0.3474 | 0.6387 | 0.3308 | 0.2348 | 0.2749 | 0.5457 | 0.3155 | 0.485 | 0.506 | 0.3189 | 0.4442 | 0.6897 | 0.5604 | 0.6689 | 0.3165 | 0.4785 | 0.2932 | 0.4777 | 0.2362 | 0.4338 | 0.3307 | 0.4711 |
| 0.4318 | 82.0 | 8733 | 1.0476 | 0.3496 | 0.6408 | 0.3292 | 0.2301 | 0.2767 | 0.543 | 0.3118 | 0.4895 | 0.51 | 0.3248 | 0.4397 | 0.6942 | 0.5553 | 0.6734 | 0.3188 | 0.4797 | 0.2965 | 0.479 | 0.2481 | 0.4462 | 0.3296 | 0.4716 |
| 0.4273 | 82.9953 | 8839 | 1.0585 | 0.348 | 0.6396 | 0.3286 | 0.2309 | 0.2694 | 0.5526 | 0.3126 | 0.4843 | 0.5035 | 0.318 | 0.4372 | 0.6912 | 0.5532 | 0.6622 | 0.3262 | 0.4861 | 0.2999 | 0.4763 | 0.2363 | 0.4246 | 0.3245 | 0.4684 |
| 0.4252 | 84.0 | 8946 | 1.0618 | 0.3439 | 0.6332 | 0.3285 | 0.2312 | 0.272 | 0.5316 | 0.3148 | 0.4854 | 0.5036 | 0.3173 | 0.4412 | 0.6932 | 0.5538 | 0.6626 | 0.3153 | 0.4848 | 0.2879 | 0.4723 | 0.2455 | 0.4431 | 0.3168 | 0.4551 |
| 0.4158 | 84.9953 | 9052 | 1.0631 | 0.3484 | 0.6387 | 0.3257 | 0.2285 | 0.2694 | 0.5474 | 0.3152 | 0.4841 | 0.5047 | 0.3141 | 0.4346 | 0.6906 | 0.5595 | 0.6671 | 0.3351 | 0.4823 | 0.287 | 0.4732 | 0.2374 | 0.4354 | 0.323 | 0.4653 |
| 0.4088 | 86.0 | 9159 | 1.0561 | 0.3525 | 0.6421 | 0.3247 | 0.2286 | 0.2768 | 0.553 | 0.3143 | 0.4856 | 0.5067 | 0.311 | 0.4405 | 0.7017 | 0.5626 | 0.6694 | 0.3299 | 0.4861 | 0.2942 | 0.4719 | 0.249 | 0.4446 | 0.3266 | 0.4618 |
| 0.4194 | 86.9953 | 9265 | 1.0613 | 0.3499 | 0.6399 | 0.3269 | 0.227 | 0.2727 | 0.5463 | 0.3132 | 0.4821 | 0.5012 | 0.3048 | 0.4328 | 0.6896 | 0.562 | 0.664 | 0.3359 | 0.4911 | 0.2915 | 0.4638 | 0.2386 | 0.4246 | 0.3216 | 0.4627 |
| 0.4087 | 88.0 | 9372 | 1.0764 | 0.3484 | 0.6358 | 0.3282 | 0.222 | 0.2707 | 0.556 | 0.3145 | 0.4806 | 0.4996 | 0.2991 | 0.4302 | 0.6907 | 0.5541 | 0.6599 | 0.3344 | 0.4848 | 0.2866 | 0.4701 | 0.2455 | 0.4185 | 0.3215 | 0.4649 |
| 0.4031 | 88.9953 | 9478 | 1.0731 | 0.3494 | 0.6316 | 0.3283 | 0.2258 | 0.2685 | 0.5512 | 0.3144 | 0.4852 | 0.5048 | 0.3054 | 0.4275 | 0.7027 | 0.5574 | 0.6617 | 0.3338 | 0.4911 | 0.2962 | 0.4679 | 0.2356 | 0.4354 | 0.3238 | 0.468 |
| 0.3968 | 90.0 | 9585 | 1.0613 | 0.35 | 0.6386 | 0.3379 | 0.2254 | 0.2739 | 0.5538 | 0.3105 | 0.4855 | 0.5043 | 0.3148 | 0.4376 | 0.6953 | 0.559 | 0.6721 | 0.335 | 0.4835 | 0.292 | 0.4679 | 0.2386 | 0.4323 | 0.3253 | 0.4658 |
| 0.398 | 90.9953 | 9691 | 1.0688 | 0.345 | 0.6347 | 0.3302 | 0.2158 | 0.2666 | 0.5543 | 0.3107 | 0.4786 | 0.4958 | 0.2918 | 0.4253 | 0.6988 | 0.5566 | 0.6622 | 0.3228 | 0.4772 | 0.2946 | 0.4643 | 0.2317 | 0.4154 | 0.319 | 0.46 |
| 0.3964 | 92.0 | 9798 | 1.0728 | 0.3467 | 0.6344 | 0.3302 | 0.2195 | 0.265 | 0.5541 | 0.313 | 0.4814 | 0.5 | 0.2978 | 0.4227 | 0.7024 | 0.5625 | 0.6703 | 0.3321 | 0.4873 | 0.2908 | 0.458 | 0.2296 | 0.4246 | 0.3187 | 0.4596 |
| 0.3947 | 92.9953 | 9904 | 1.0710 | 0.3458 | 0.6355 | 0.3264 | 0.2211 | 0.267 | 0.5534 | 0.3141 | 0.4837 | 0.5022 | 0.3055 | 0.4276 | 0.7005 | 0.5555 | 0.6667 | 0.3352 | 0.4899 | 0.2911 | 0.4616 | 0.2274 | 0.4292 | 0.3198 | 0.4636 |
| 0.3903 | 94.0 | 10011 | 1.0691 | 0.3465 | 0.6375 | 0.3308 | 0.227 | 0.2659 | 0.5562 | 0.3099 | 0.4822 | 0.502 | 0.3012 | 0.431 | 0.7001 | 0.5563 | 0.6626 | 0.3354 | 0.4911 | 0.2883 | 0.4643 | 0.229 | 0.4231 | 0.3234 | 0.4689 |
| 0.3943 | 94.9953 | 10117 | 1.0708 | 0.3489 | 0.6366 | 0.3274 | 0.2278 | 0.2647 | 0.5588 | 0.3154 | 0.4838 | 0.502 | 0.3023 | 0.4258 | 0.704 | 0.5611 | 0.6653 | 0.3382 | 0.4873 | 0.2922 | 0.4643 | 0.2312 | 0.4292 | 0.3216 | 0.464 |
| 0.3855 | 96.0 | 10224 | 1.0744 | 0.349 | 0.6353 | 0.3315 | 0.2215 | 0.2689 | 0.5566 | 0.318 | 0.4813 | 0.5011 | 0.2983 | 0.431 | 0.6985 | 0.5574 | 0.664 | 0.3432 | 0.4848 | 0.2908 | 0.4621 | 0.231 | 0.4292 | 0.3224 | 0.4653 |
| 0.3833 | 96.9953 | 10330 | 1.0692 | 0.3474 | 0.6341 | 0.3282 | 0.2244 | 0.2693 | 0.5567 | 0.3167 | 0.4833 | 0.5038 | 0.3031 | 0.4318 | 0.7037 | 0.5553 | 0.6649 | 0.3348 | 0.4861 | 0.2918 | 0.4652 | 0.2323 | 0.4385 | 0.3227 | 0.4644 |
| 0.3838 | 98.0 | 10437 | 1.0700 | 0.3464 | 0.6366 | 0.3231 | 0.2234 | 0.2692 | 0.554 | 0.3155 | 0.4842 | 0.5026 | 0.3011 | 0.4319 | 0.702 | 0.5559 | 0.6658 | 0.3326 | 0.4873 | 0.2878 | 0.4594 | 0.232 | 0.4323 | 0.3239 | 0.4684 |
| 0.385 | 98.9953 | 10543 | 1.0722 | 0.3471 | 0.6369 | 0.3259 | 0.2192 | 0.2683 | 0.5535 | 0.3142 | 0.4823 | 0.5023 | 0.2976 | 0.4298 | 0.7033 | 0.5569 | 0.6676 | 0.3329 | 0.4823 | 0.2887 | 0.4612 | 0.2342 | 0.4354 | 0.3225 | 0.4649 |
| 0.3644 | 99.5305 | 10600 | 1.0727 | 0.3471 | 0.6376 | 0.3269 | 0.2239 | 0.2687 | 0.5542 | 0.3146 | 0.485 | 0.5045 | 0.3033 | 0.4315 | 0.7048 | 0.556 | 0.6671 | 0.3346 | 0.4873 | 0.2879 | 0.4625 | 0.2348 | 0.44 | 0.3221 | 0.4653 |
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.18.0
- Tokenizers 0.19.0
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "SenseTime/deformable-detr", "model-index": [{"name": "sensetime-deformable-detr-finetuned-10k-cppe5-reproduce", "results": []}]} | qubvel-hf/sensetime-deformable-detr-finetuned-10k-cppe5-reproduce | null | [
"transformers",
"safetensors",
"deformable_detr",
"object-detection",
"generated_from_trainer",
"base_model:SenseTime/deformable-detr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T13:21:35+00:00 |
voice-activity-detection | pyannote | This is the model card of a pyannote model that has been pushed to the Hub. This model card has been automatically generated. | {"library_name": "pyannote", "tags": ["pyannote", "pyannote.audio", "pyannote-audio-model", "audio", "voice", "speech", "speaker", "speaker-diarization", "speaker-change-detection", "speaker-segmentation", "voice-activity-detection", "overlapped-speech-detection", "resegmentation", "speaker-recognition", "speaker-verification", "speaker-identification", "speaker-embedding", "PyTorch", "wespeaker"], "licence": "mit"} | kamilakesbi/speaker-segmentation-test | null | [
"pyannote",
"pytorch",
"pyannote.audio",
"pyannote-audio-model",
"audio",
"voice",
"speech",
"speaker",
"speaker-diarization",
"speaker-change-detection",
"speaker-segmentation",
"voice-activity-detection",
"overlapped-speech-detection",
"resegmentation",
"speaker-recognition",
"speaker-verification",
"speaker-identification",
"speaker-embedding",
"PyTorch",
"wespeaker",
"region:us"
] | null | 2024-04-29T13:21:53+00:00 |
null | diffusers | {} | eurecom-ds/scoresdeve-conditional-ema-shapes3d-out-t01-64 | null | [
"diffusers",
"safetensors",
"region:us"
] | null | 2024-04-29T13:21:56+00:00 |
|
null | diffusers | {} | eurecom-ds/scoresdeve-conditional-ema-shapes3d-out-t02-64 | null | [
"diffusers",
"safetensors",
"region:us"
] | null | 2024-04-29T13:22:14+00:00 |
|
null | diffusers | {} | eurecom-ds/scoresdeve-conditional-ema-shapes3d-out-t03-64 | null | [
"diffusers",
"safetensors",
"region:us"
] | null | 2024-04-29T13:22:32+00:00 |
|
text-to-audio | transformers | {} | procit001/speecht5_tts_nl_v3 | null | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T13:22:33+00:00 |
|
null | diffusers | {} | eurecom-ds/scoresdeve-conditional-ema-shapes3d-out-t04-64 | null | [
"diffusers",
"safetensors",
"region:us"
] | null | 2024-04-29T13:22:45+00:00 |
|
null | diffusers | {} | eurecom-ds/scoresdeve-conditional-ema-shapes3d-out-t05-64 | null | [
"diffusers",
"safetensors",
"region:us"
] | null | 2024-04-29T13:23:00+00:00 |
|
null | diffusers | {} | eurecom-ds/scoresdeve-conditional-ema-shapes3d-out-t06-64 | null | [
"diffusers",
"safetensors",
"region:us"
] | null | 2024-04-29T13:23:12+00:00 |
|
null | diffusers | {} | eurecom-ds/scoresdeve-conditional-ema-shapes3d-out-t07-64 | null | [
"diffusers",
"safetensors",
"region:us"
] | null | 2024-04-29T13:23:25+00:00 |
|
null | diffusers | {} | eurecom-ds/scoresdeve-conditional-ema-shapes3d-out-t08-64 | null | [
"diffusers",
"safetensors",
"region:us"
] | null | 2024-04-29T13:23:44+00:00 |
|
null | transformers |
# Model Card for Shivneri Marathi LLM
<!-- -->
## Model Details
Shivneri Marathi LLM is being built with the aim of bringing the benefits of Generative AI to India's non-English-speaking (especially Marathi-speaking) population.
Marathi has the third largest number of native speakers in India, after Hindi and Bengali.
Almost 83 million people speak the language.
This is a preliminary version of our Marathi LLM (Large Language Model)!
Built on the mighty Llama 3 8B Instruct model, Shivneri LLM can generate creative and informative text in both Marathi and English. This is just the beginning: we're constantly improving Shivneri, and even more exciting features are on the horizon!
### Model Description
<!-- -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** Amit Ghadge
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** Amit Ghadge
- **Model type:** Decoder-only large language model (LLM) with a transformer architecture
- **Language(s) (NLP):** Marathi, English
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** Meta-Llama-3-8B-Instruct
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/amitagh/shivneri-llm
- **Paper [optional]:** https://www.linkedin.com/pulse/releasing-shivneri-llm-instruct-model-version-amit-ghadge-j051f/
- **Demo [optional]:** Coming soon
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
This is a very preliminary version, so please use it with caution. We suggest waiting for further updates and the final models before trying it out.
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
SFT with LoRA on the datasets mentioned above.
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
SFT with LoRA; a typical configuration is sketched below.
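Here is a hedged sketch of what such a setup typically looks like with the `peft` library; the rank, alpha, dropout, and target modules are illustrative assumptions, not the authors' published settings.
```python
from peft import LoraConfig

# Illustrative LoRA hyperparameters; the authors' actual values are not published.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```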
### Model Architecture and Objective
Decoder-only large language model (LLM) with a transformer architecture
### Compute Infrastructure
A100 80 GB
## Meet the Developers
Get to know the creators behind this innovative model and follow their contributions to the field:
- [Amit Ghadge](https://www.linkedin.com/in/amit-ghadge-a162a115/)
## Model Release Date
May 1st, 2024.
**Status:** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
## License
The model inherits the license from Meta Llama 3.
## How to use
Usage remains largely the same as for the original Meta-Llama-3-8B-Instruct model; visit its page for more details.
With this model you can now use Marathi prompts and build conversational apps, as in the sketch below.
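A minimal sketch with `llama-cpp-python` is shown here; the filename glob and generation settings are assumptions, so check the repository's file list for the exact quantization variant you want.
```python
from llama_cpp import Llama

# The filename glob is an assumption; point it at an actual .gguf file
# from this repository's file list.
llm = Llama.from_pretrained(
    repo_id="amitagh/shivneri-llm-it-v0.2-GGUF",
    filename="*.gguf",
    n_ctx=4096,
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "महाराष्ट्राची राजधानी कोणती आहे?"}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```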
## Citation [optional]
If you use this model in your research, please cite:
```bibtex
@misc{amitghadge2024ShivneriLLMv01,
  title={Shivneri-LLM: Your Bilingual Marathi and English Text Generation LLM},
  author={Amit Ghadge},
  year={2024},
  howpublished={\url{https://www.linkedin.com/pulse/releasing-shivneri-llm-instruct-model-version-amit-ghadge-j051f/}},
}
```
We hope this model serves as a valuable tool in your NLP toolkit and look forward to seeing the advancements it will enable in the understanding and generation of the Marathi language.
| {"language": ["mr", "en"], "license": "llama3", "library_name": "transformers", "datasets": ["smallstepai/marathi-instruction-tuning-alpaca", "ai4bharat/indic-align"]} | amitagh/shivneri-llm-it-v0.2-GGUF | null | [
"transformers",
"gguf",
"mr",
"en",
"dataset:smallstepai/marathi-instruction-tuning-alpaca",
"dataset:ai4bharat/indic-align",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T13:24:25+00:00 |
null | diffusers | {} | eurecom-ds/scoresdeve-conditional-ema-shapes3d-out-t09-64 | null | [
"diffusers",
"safetensors",
"region:us"
] | null | 2024-04-29T13:24:26+00:00 |
|
null | diffusers | {} | eurecom-ds/scoresdeve-conditional-ema-shapes3d-out-t099-64 | null | [
"diffusers",
"safetensors",
"region:us"
] | null | 2024-04-29T13:24:38+00:00 |
|
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to know whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements of the original repo google/codegemma-2b are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed the quantization-related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel

# Try the HQQ causal-LM wrapper first; fall back to the generic HQQ loader.
try:
    model = HQQModelForCausalLM.from_quantized("PrunaAI/google-codegemma-2b-HQQ-2bit-smashed", device_map='auto')
except Exception:
    model = AutoHQQHFModel.from_quantized("PrunaAI/google-codegemma-2b-HQQ-2bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("google/codegemma-2b")

input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0]))
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model google/codegemma-2b, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "google/codegemma-2b"} | PrunaAI/google-codegemma-2b-HQQ-2bit-smashed | null | [
"transformers",
"gemma",
"text-generation",
"pruna-ai",
"base_model:google/codegemma-2b",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T13:24:44+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
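As a stopgap, here is a minimal sketch using the text-generation pipeline; the prompt is illustrative, since the card does not document intended usage.
```python
from transformers import pipeline

# Minimal usage sketch; prompt and settings are placeholders.
generator = pipeline("text-generation", model="atmansingh/medai", device_map="auto")
result = generator("Explain what a model card is.", max_new_tokens=64)
print(result[0]["generated_text"])
```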
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | atmansingh/medai | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T13:25:06+00:00 |
text-generation | transformers | # GreenBit LLMs
These are GreenBitAI's pretrained **low-bit** LLMs, offering extreme compression while retaining strong performance.
Please refer to our [Github page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and more information. | {"license": "apache-2.0"} | GreenBitAI/Llama-3-8B-layer-mix-bpw-4.0 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T13:25:19+00:00 |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to know whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements of the original repo google/codegemma-2b are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed the quantization-related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel

# Try the HQQ causal-LM wrapper first; fall back to the generic HQQ loader.
try:
    model = HQQModelForCausalLM.from_quantized("PrunaAI/google-codegemma-2b-HQQ-1bit-smashed", device_map='auto')
except Exception:
    model = AutoHQQHFModel.from_quantized("PrunaAI/google-codegemma-2b-HQQ-1bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("google/codegemma-2b")

input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0]))
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model google/codegemma-2b, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "google/codegemma-2b"} | PrunaAI/google-codegemma-2b-HQQ-1bit-smashed | null | [
"transformers",
"gemma",
"text-generation",
"pruna-ai",
"base_model:google/codegemma-2b",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T13:25:29+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | quickstep3621/n8lo64m | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T13:25:35+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | golf2248/917bjae | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T13:25:35+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | quickstep3621/qriag3h | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T13:25:43+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | golf2248/9z4ecmt | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T13:25:44+00:00 |
null | null | {} | kartikgawandeonline/T5-Small_AbsSumm_XSumCNN | null | [
"region:us"
] | null | 2024-04-29T13:25:45+00:00 |
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | golf2248/aoyxw3l | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T13:25:50+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | golf2248/e5qu8jo | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T13:25:54+00:00 |
null | pyannote | This is the model card of a pyannote model that has been pushed on the Hub. This model card has been automatically generated. | {"library_name": "pyannote", "tags": ["pyannote", "pyannote.audio", "pyannote-audio-model", "audio", "voice", "speech", "speaker", "speaker-recognition", "speaker-verification", "speaker-identification", "speaker-embedding", "PyTorch", "wespeaker"], "licence": "cc-by-4.0"} | kamilakesbi/speaker-embedding-test | null | [
"pyannote",
"pytorch",
"pyannote.audio",
"pyannote-audio-model",
"audio",
"voice",
"speech",
"speaker",
"speaker-recognition",
"speaker-verification",
"speaker-identification",
"speaker-embedding",
"PyTorch",
"wespeaker",
"region:us"
] | null | 2024-04-29T13:26:15+00:00 |
automatic-speech-recognition | pyannote | This is the model card of a pyannote pipeline that has been pushed on the Hub. This model card has been automatically generated. | {"library_name": "pyannote", "tags": ["pyannote", "pyannote.audio", "pyannote-audio-pipeline", "audio", "voice", "speech", "speaker", "speaker-diarization", "speaker-change-detection", "voice-activity-detection", "overlapped-speech-detection", "automatic-speech-recognition"], "licence": "mit"} | kamilakesbi/spd_pipeline_test | null | [
"pyannote",
"pyannote.audio",
"pyannote-audio-pipeline",
"audio",
"voice",
"speech",
"speaker",
"speaker-diarization",
"speaker-change-detection",
"voice-activity-detection",
"overlapped-speech-detection",
"automatic-speech-recognition",
"region:us"
] | null | 2024-04-29T13:26:20+00:00 |
null | null | {} | sreya518/t5_summarizer_model | null | [
"region:us"
] | null | 2024-04-29T13:28:05+00:00 |
|
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# basic_train_basic_test 1000 similar params: per_device_train_batch_size=32, # was 16, and below it 1; gradient_accumulation_steps=2, warmup_steps=300, max_steps=3000
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the xbilek25/xbilek25/train_set_1000_en_de_en dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2957
- Wer: 10.8108
## Model description
More information needed
## Intended uses & limitations
More information needed
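A minimal usage sketch (an assumption, not part of the original card — it presumes the checkpoint loads with the standard transformers ASR pipeline and that `sample.wav` is a placeholder for your own audio file):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint with the standard ASR pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="xbilek25/whisper-small-train-v2.2",
)
print(asr("sample.wav")["text"])  # transcribe a local audio file
```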
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 3000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0087 | 7.02 | 500 | 0.2377 | 11.3924 |
| 0.0024 | 15.02 | 1000 | 0.2643 | 11.8029 |
| 0.0006 | 23.02 | 1500 | 0.2832 | 10.8792 |
| 0.0004 | 31.02 | 2000 | 0.2901 | 10.6055 |
| 0.0003 | 39.01 | 2500 | 0.2941 | 10.7766 |
| 0.0003 | 47.01 | 3000 | 0.2957 | 10.8108 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2
| {"language": ["multilingual"], "license": "apache-2.0", "tags": ["hf-asr-leaderboard", "generated_from_trainer"], "datasets": ["mozilla-foundation/common_voice_11_0"], "metrics": ["wer"], "base_model": "openai/whisper-small", "model-index": [{"name": "basic_train_basic_test 1000 similar params: per_device_train_batch_size=32, # bylo 16 a pod tim 1 gradient_accumulation_steps=2, warmup_steps=300, max_steps=3000", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "xbilek25/xbilek25/train_set_1000_en_de_en", "type": "mozilla-foundation/common_voice_11_0", "args": "config: csen, split: train"}, "metrics": [{"type": "wer", "value": 10.81081081081081, "name": "Wer"}]}]}]} | xbilek25/whisper-small-train-v2.2 | null | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"multilingual",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T13:28:54+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openthai-mistral-7b-text-to-pandas
This model is a fine-tuned version of [openthaigpt/openthai-mistral-pretrained-7B](https://huggingface.co/openthaigpt/openthai-mistral-pretrained-7B) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
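A minimal loading sketch (an assumption, not part of the original card — it presumes this repo holds a PEFT adapter on top of the base model listed above):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the fine-tuned adapter from this repo.
base = AutoModelForCausalLM.from_pretrained("openthaigpt/openthai-mistral-pretrained-7B")
model = PeftModel.from_pretrained(base, "kkatiz/openthai-mistral-7b-text-to-pandas")
tokenizer = AutoTokenizer.from_pretrained("openthaigpt/openthai-mistral-pretrained-7B")
```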
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 10
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "openthaigpt/openthai-mistral-pretrained-7B", "model-index": [{"name": "openthai-mistral-7b-text-to-pandas", "results": []}]} | kkatiz/openthai-mistral-7b-text-to-pandas | null | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:openthaigpt/openthai-mistral-pretrained-7B",
"region:us"
] | null | 2024-04-29T13:29:32+00:00 |
null | null |
# Model description
GMML is a self-supervised learning model that learns to group masked pixels in an image. The model is trained on ImageNet-1K.
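As a rough illustration of the group-masking idea — a hedged sketch with made-up parameter values, not the official implementation (that lives in the repo linked below):
```python
import torch

def group_mask(images, patch_size=16, num_groups=4, group_size=3):
    """Mask square groups of adjacent patches; purely illustrative, not the GMML code."""
    B, _, H, W = images.shape
    gh, gw = H // patch_size, W // patch_size          # patch grid size
    mask = torch.zeros(B, gh, gw, dtype=torch.bool)
    for b in range(B):
        for _ in range(num_groups):
            i = torch.randint(0, max(gh - group_size, 1), (1,)).item()
            j = torch.randint(0, max(gw - group_size, 1), (1,)).item()
            mask[b, i:i + group_size, j:j + group_size] = True  # hide a connected block
    return mask

imgs = torch.randn(2, 3, 224, 224)                     # dummy batch
print(group_mask(imgs).shape)                          # torch.Size([2, 14, 14])
```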
# Model Sources
- https://github.com/Sara-Ahmed/GMML
- https://arxiv.org/abs/2205.14986
# Model Card Authors
Sara Atito, Muhammad Awais, Josef Kittler
# How to use
```python
from transformers import BertConfig, BertModel

# Generic push-to-Hub / reload flow (the BERT model and repo name are placeholders)
config = BertConfig()
model = BertModel(config)
model.push_to_hub("nielsr/my-awesome-bert-model")

# reload the model from the Hub
model = BertModel.from_pretrained("nielsr/my-awesome-bert-model")
```
# BibTeX entry and citation info
```
@inproceedings{atito2023gmml,
title={GMML is all you need},
author={Atito, Sara and Awais, Muhammed and Nandam, Srinivasa and Kittler, Josef},
booktitle={2023 IEEE International Conference on Image Processing (ICIP)},
pages={2125--2129},
year={2023},
organization={IEEE}
}
``` | {"license": "apache-2.0", "tags": ["self-supervised learning", "vision", "GMML"], "inference": false} | erow/GMML | null | [
"self-supervised learning",
"vision",
"GMML",
"arxiv:2205.14986",
"license:apache-2.0",
"region:us"
] | null | 2024-04-29T13:29:45+00:00 |
null | null | {} | jhsim/Dacon_birds | null | [
"region:us"
] | null | 2024-04-29T13:29:47+00:00 |
|
text-generation | transformers | # GreenBit LLMs
These are GreenBitAI's pretrained **low-bit** LLMs, offering extreme compression while retaining strong performance.
Please refer to our [Github page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and more information.
| **Repository (Llama 3 Family)** | **Avg Acc.** | **OpenBQ** | **ARC-E** | **Winogr.** | **HellaS.** | **ARC-C** | **PIQA** | **BoolQ** | **RACE** | **ANLI-R1** | **ANLI-R2** | **ANLI-R3** | **WiC** |
|:----------------------------------------|:------------:|:----------:|:---------:|:-----------:|:-----------:|:---------:|:--------:|:---------:|:--------:|:-----------:|:-----------:|:-----------:|:-------:|
| `Llama-3-8B-layer-mix-bpw-2.2` | 0.499 | 0.302 | 0.739 | 0.674 | 0.509 | 0.396 | 0.725 | 0.743 | 0.406 | 0.327 | 0.337 | 0.340 | 0.500 |
| `Llama-3-8B-layer-mix-bpw-2.5` | 0.506 | 0.298 | 0.760 | 0.684 | 0.513 | 0.418 | 0.744 | 0.756 | 0.389 | 0.335 | 0.335 | 0.335 | 0.509 |
| `Llama-3-8B-layer-mix-bpw-3.0` | 0.523 | 0.318 | 0.770 | 0.708 | 0.540 | 0.441 | 0.767 | 0.784 | 0.407 | 0.333 | 0.345 | 0.343 | 0.526 |
| `Llama-3-8B-layer-mix-bpw-4.0` | 0.542 | 0.338 | 0.791 | 0.729 | 0.591 | 0.484 | 0.797 | 0.799 | 0.398 | 0.337 | 0.345 | 0.352 | 0.545 |
| `Llama-3-8B-instruct-layer-mix-bpw-2.2` | 0.514 | 0.292 | 0.645 | 0.672 | 0.499 | 0.367 | 0.698 | 0.775 | 0.423 | 0.417 | 0.424 | 0.398 | 0.565 |
| `Llama-3-8B-instruct-layer-mix-bpw-2.5` | 0.528 | 0.304 | 0.741 | 0.681 | 0.512 | 0.412 | 0.749 | 0.798 | 0.425 | 0.417 | 0.410 | 0.390 | 0.498 |
| `Llama-3-8B-instruct-layer-mix-bpw-3.0` | 0.547 | 0.316 | 0.787 | 0.690 | 0.530 | 0.459 | 0.768 | 0.800 | 0.437 | 0.435 | 0.417 | 0.387 | 0.548 |
| `Llama-3-8B-instruct-layer-mix-bpw-4.0` | 0.576 | 0.344 | 0.808 | 0.716 | 0.569 | 0.513 | 0.778 | 0.825 | 0.449 | 0.462 | 0.449 | 0.432 | 0.578 | | {"license": "apache-2.0"} | GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-4.0 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T13:30:05+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | lunarsylph/stablecell_v50 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T13:31:05+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | hereis/nous1 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T13:31:16+00:00 |
image-text-to-text | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
A Russian-language version of Idefics, fine-tuned on a Russified subset of LLaVA.
The SFT stage used no text-only data, so a quality drop on text-only inputs is quite possible.
Training was done in int4 with QLoRA on consumer-grade hardware.
## Model Details
### Model Description
- **Model type:** ruIdefics2
- **Language(s) (NLP):** Russian
- **License:** Apache-2.0
- **Finetuned from model:** Idefics2
# How to Get Started
## Running in fp16
```python
import requests
import torch
from PIL import Image
from io import BytesIO
from transformers import AutoProcessor, AutoModelForVision2Seq
from transformers.image_utils import load_image
DEVICE = "cuda:0"
image1 = load_image("https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg")
image2 = load_image("https://cdn.britannica.com/59/94459-050-DBA42467/Skyline-Chicago.jpg")
image3 = load_image("https://cdn.britannica.com/68/170868-050-8DDE8263/Golden-Gate-Bridge-San-Francisco.jpg")
processor = AutoProcessor.from_pretrained("GeorgeBredis/ruIdefics2-ruLLaVA-merged")
model = AutoModelForVision2Seq.from_pretrained(
    "GeorgeBredis/ruIdefics2-ruLLaVA-merged",
    torch_dtype=torch.float16,  # load the weights in fp16, matching the section title
).to(DEVICE)
messages = [
{
"role": "user",
"content": [
{"type": "image"},
{"type": "text", "text": "Что изображено на данной картинке?"},
]
}
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image1], return_tensors="pt")
inputs = {k: v.to(DEVICE) for k, v in inputs.items()}
generated_ids = model.generate(**inputs, max_new_tokens=500)
generated_texts = processor.batch_decode(generated_ids, skip_special_tokens=True)
print(generated_texts)
```
This may well not fit on your GPU (if you load it onto a GPU), so below is a bitsandbytes (bnb) variant for running in Colab.
## Running in int4/int8 with bnb
Requires installing peft.
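For example (assuming pip; `bitsandbytes` is also needed for `BitsAndBytesConfig`):
```bash
pip install peft bitsandbytes
```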
```python
import requests
import torch
from PIL import Image
from io import BytesIO
from peft import LoraConfig
from transformers import AutoProcessor, BitsAndBytesConfig, Idefics2ForConditionalGeneration
from transformers.image_utils import load_image
DEVICE = "cuda:0"
image1 = load_image("https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg")
image2 = load_image("https://cdn.britannica.com/59/94459-050-DBA42467/Skyline-Chicago.jpg")
image3 = load_image("https://cdn.britannica.com/68/170868-050-8DDE8263/Golden-Gate-Bridge-San-Francisco.jpg")
processor = AutoProcessor.from_pretrained(
"GeorgeBredis/ruIdefics2-ruLLaVA-merged",
do_image_splitting=False
)
quantization_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_use_double_quant=True,
bnb_4bit_compute_dtype=torch.float16
)
model = Idefics2ForConditionalGeneration.from_pretrained(
"GeorgeBredis/ruIdefics2-ruLLaVA-merged",
torch_dtype=torch.float16,
quantization_config=quantization_config,
)
# no need to move the model to the GPU: in int4/int8 it is loaded straight onto it
messages = [
{
"role": "user",
"content": [
{"type": "image"},
{"type": "text", "text": "Что изображено на данной картинке?"},
]
}
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image1], return_tensors="pt")
inputs = {k: v.to(DEVICE) for k, v in inputs.items()}
generated_ids = model.generate(**inputs, max_new_tokens=500)
generated_texts = processor.batch_decode(generated_ids, skip_special_tokens=True)
print(generated_texts)
```
| {"language": ["ru"], "license": "apache-2.0", "library_name": "transformers", "tags": ["multimodal", "vision", "image-text-to-text"], "datasets": "Vikhrmodels/LLaVA-Instruct-ru", "pipeline_tag": "image-text-to-text"} | GeorgeBredis/ruIdefics2-ruLLaVA-merged | null | [
"transformers",
"safetensors",
"idefics2",
"pretraining",
"multimodal",
"vision",
"image-text-to-text",
"ru",
"dataset:Vikhrmodels/LLaVA-Instruct-ru",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T13:31:18+00:00 |
null | null | {} | GladiusTn/llama3_ocr_to_xml_A1_guff_8 | null | [
"gguf",
"region:us"
] | null | 2024-04-29T13:33:39+00:00 |